Google Associate Cloud Engineer Exam Questions PDF

Summary

This document contains sample exam questions and answers for the Google Associate Cloud Engineer certification. The questions cover topics such as Compute Engine, networking, and storage, and are intended to help readers prepare for the Associate Cloud Engineer exam.

Full Transcript

Google Associate Cloud Engineer
Total: 285 Questions
Link: https://certyiq.com/papers/google/associate-cloud-engineer

Question: 1
Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance. What should you do?
A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.
B. Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance.
C. Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the compute.osAdminLogin role to the Google group corresponding to this team.
D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.
Answer: C
Explanation: C is correct. Tying SSH access to each member's own Google account and granting the compute.osAdminLogin role to the team's group keeps credential deployment efficient (a single IAM grant) and makes access to any given instance auditable per user.
Reference: https://cloud.google.com/compute/docs/instances/managing-instance-access

Question: 2
You need to create a custom VPC with a single subnet. The subnet's range must be as large as possible. Which range should you use?
A. 0.0.0.0/0
B. 10.0.0.0/8
C. 172.16.0.0/12
D. 192.168.0.0/16
Answer: B
Explanation: B is correct: use the 10.0.0.0/8 CIDR range. The private network ranges are defined by the IETF (https://tools.ietf.org/html/rfc1918) and adhered to by all cloud providers. The supported internal IP address ranges are:
1. 24-bit block 10.0.0.0/8 (16,777,216 IP addresses)
2. 20-bit block 172.16.0.0/12 (1,048,576 IP addresses)
3. 16-bit block 192.168.0.0/16 (65,536 IP addresses)
10.0.0.0/8 gives you the most extensive range: 16,777,216 IP addresses.

Question: 3
You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one geographic location. You need to support point-in-time recovery. What should you do?
A. Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.
B. Select Cloud SQL (MySQL). Select the create failover replicas option.
C. Select Cloud Spanner. Set up your instance with 2 nodes.
D. Select Cloud Spanner. Set up your instance as multi-regional.
Answer: A
Explanation: A is correct. You must enable binary logging to use point-in-time recovery. Enabling binary logging causes a slight reduction in write performance.
Reference: https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
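For option A, binary logging can also be enabled from the command line; a minimal sketch (the instance name is illustrative):

    gcloud sql instances patch my-mysql-instance --enable-bin-log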
Question: 4
You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each. What should you do?
A. Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP).
B. Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.
C. Create a managed instance group. Set the Autohealing health check to healthy (HTTP).
D. Create a managed instance group. Verify that the autoscaling setting is on.
Answer: C
Explanation: Autohealing is a managed instance group feature, so C is correct. Pro tip: use separate health checks for load balancing and for autohealing. Health checks for load balancing detect unresponsive instances and direct traffic away from them; health checks for autohealing detect and recreate failed instances, so they should be less aggressive than load balancing health checks. Using the same health check for both would remove the distinction between unresponsive instances and failed instances, causing unnecessary latency and unavailability for your users.
Reference: https://cloud.google.com/compute/docs/tutorials/high-availability-autohealing

Question: 5
You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps. What should you do?
A. Use gcloud config configurations describe to review the output.
B. Use gcloud config configurations activate and gcloud config list to review the output.
C. Use kubectl config get-contexts to review the output.
D. Use kubectl config use-context and kubectl config view to review the output.
Answer: D
Explanation: D is correct. Narrowing the choice to C and D: the goal is to review an inactive configuration. C only lists all contexts, while D switches to the specific context and shows its details, which is what the question asks for.
Reference: https://medium.com/google-cloud/kubernetes-engine-kubectl-config-b6270d2b656c
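A minimal sketch of option D, assuming a kubeconfig context named dev-cluster (the name is illustrative):

    kubectl config use-context dev-cluster
    kubectl config view --minify   # show only the now-current context's details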
Question: 6
Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google's recommended practices. Which storage option should you use?
A. Multi-Regional Storage
B. Regional Storage
C. Nearline Storage
D. Coldline Storage
Answer: D
Explanation: Coldline is a low-latency storage class designed for long-term archival and disaster recovery. Coldline is well suited to the archival needs of big data or multimedia content, allowing businesses to archive years of data while still providing fast, millisecond access to that cold data when needed.
Reference: https://cloud.google.com/storage/docs/storage-classes#nearline

Question: 7
Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do?
A. Contact [email protected] with your bank account details and request a corporate billing account for your company.
B. Create a ticket with Google Support and wait for their call to share your credit card details over the phone.
C. In the Google Platform Console, go to the Resource Manager and move all projects to the root Organization.
D. In the Google Cloud Platform Console, create a new billing account and set up a payment method.
Answer: D
Explanation: The answer is D. The projects already exist; the problem is that employees are paying for them individually. Creating a new billing account with a company payment method, and then linking all the projects to it, centralizes billing.
Reference: https://www.whizlabs.com/blog/google-cloud-interview-questions/

Question: 8
You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to change the configuration of the application and want the application to be able to reach the licensing server. What should you do?
A. Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server.
B. Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing server.
C. Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server.
D. Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.
Answer: A
Explanation: A. 10.0.3.21 is a private (internal) address, and reserving it as a static internal IP address ensures it never changes, so the application's hard-coded configuration keeps working.

Question: 9
You are deploying an application to App Engine. You want the number of instances to scale based on request rate. You need at least 3 unoccupied instances at all times. Which scaling type should you use?
A. Manual Scaling with 3 instances.
B. Basic Scaling with min_instances set to 3.
C. Basic Scaling with max_instances set to 3.
D. Automatic Scaling with min_idle_instances set to 3.
Answer: D
Explanation: D is correct. App Engine calculates the number of instances necessary to serve your current application traffic based on scaling settings such as target_cpu_utilization and target_throughput_utilization. Setting min_idle_instances specifies the number of instances to run in addition to this calculated number. For example, if App Engine calculates that 5 instances are necessary to serve traffic and min_idle_instances is set to 2, App Engine will run 7 instances (5 calculated from traffic, plus 2 per min_idle_instances).
Reference: https://cloud.google.com/appengine/docs/standard/go/config/appref

Question: 10
You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps. What should you do?
A. Use gcloud iam roles copy and specify the production project as the destination project.
B. Use gcloud iam roles copy and specify your organization as the destination organization.
C. In the Google Cloud Platform Console, use the 'create role from role' functionality.
D. In the Google Cloud Platform Console, use the 'create role' functionality and select all applicable permissions.
Answer: A
Explanation: Use gcloud iam roles copy and specify the production project as the destination project.
Reference: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
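A sketch of option A for a single custom role (project and role names are illustrative):

    gcloud iam roles copy --source "projects/dev-project/roles/myCustomRole" \
        --destination myCustomRole --dest-project prod-project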
Question: 11
You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be in a dedicated configuration file. You want to follow Google's recommended practices. Which method should you use?
A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group
Answer: A
Explanation: A. Deployment Manager provisions resources declaratively from a configuration file. Managed instance groups do not support a configuration file for provisioning VM instances; they use instance templates.

Question: 12
You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?
A. Use kubectl app deploy.
B. Use gcloud app deploy.
C. Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
D. Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
Answer: C
Explanation: C is correct. A can be eliminated because kubectl app is not a valid command. B can be eliminated because gcloud app deploy deploys to App Engine, not to Kubernetes (and it still requires a config file pointing to the image). D is not correct, since you cannot deploy a container image directly from Cloud Storage.
Reference: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
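A sketch of option C's workflow (project, image, and file names are illustrative):

    docker build -t gcr.io/my-project/myapp:v1 .
    docker push gcr.io/my-project/myapp:v1
    # deployment.yaml sets spec.template.spec.containers[].image
    # to gcr.io/my-project/myapp:v1
    kubectl apply -f deployment.yaml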
Question: 13
Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do?
A. Download and deploy the Jenkins Java WAR to App Engine Standard.
B. Create a new Compute Engine instance and install Jenkins through the command line interface.
C. Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.
D. Use GCP Marketplace to launch the Jenkins solution.
Answer: D
Explanation: D. Use GCP Marketplace to launch the Jenkins solution.
Reference: https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine

Question: 14
You need to update a deployment in Deployment Manager without any resource downtime in the deployment. Which command should you use?
A. gcloud deployment-manager deployments create --config
B. gcloud deployment-manager deployments update --config
C. gcloud deployment-manager resources create --config
D. gcloud deployment-manager resources update --config
Answer: B
Explanation: B is correct. As an additional tip, "resources create" and "resources update" are not even valid commands in the deployment-manager command group; deployments are the unit you create and update.
Reference: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/update

Question: 15
You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?
A. Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.
B. Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.
C. Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.
D. Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.
Answer: B
Explanation: Under on-demand pricing, BigQuery charges for queries by one metric: the number of bytes processed (also referred to as bytes read), not the number of bytes returned. You are charged for the bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Drive, or Cloud Bigtable. On-demand pricing is based solely on usage. (https://cloud.google.com/bigquery/pricing#on_demand_pricing)
Reference: https://cloud.google.com/bigquery/docs/estimate-costs
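A sketch of option B's dry run (the table name is illustrative); the command reports how many bytes the query would process without actually running it:

    bq query --use_legacy_sql=false --dry_run \
        'SELECT name FROM `my-project.my_dataset.my_table` WHERE age > 30'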
Question: 16
You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally efficient and completed as quickly as possible. What should you do?
A. Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application.
B. Create an instance template, and use the template in a managed instance group with autoscaling configured.
C. Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day.
D. Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.
Answer: B
Explanation: B. Create an instance template, and use the template in a managed instance group with autoscaling configured.

Question: 17
You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax. What should you do?
A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
C. Export your transactions to a local file, and perform analysis with a desktop tool.
D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.
Answer: D
Explanation: Solve this by eliminating the options that do not fit the two key requirements: analyzing service costs from three separate projects, and using standard query syntax (relational data and SQL).
A. Cloud Storage bucket into Cloud Bigtable: not feasible, mainly because Bigtable is not a good fit for structured, relational data queried with SQL; it is better suited to semi-structured, NoSQL data.
B. Cloud Storage bucket into Google Sheets: not feasible, because this option involves no SQL, which is one of the requirements.
C. Local file and desktop tools: eliminated because this simple operation has a GCP-native solution; there is no need to go outside the cloud.
D. BigQuery with SQL queries: the right answer.

Question: 18
You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy?
A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 - 90).
B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.
C. Use gsutil rewrite and set the Delete action to 275 days (365 - 90).
D. Use gsutil rewrite and set the Delete action to 365 days.
Answer: B
Explanation: The Age condition is measured from the object's original creation time, and a lifecycle SetStorageClass action does not reset it; the creation date is only reset when an object is explicitly rewritten to another storage class with the rewrite option. Since the object is moved to Coldline after 90 days and should be deleted one year after creation, set the Delete action to 365 days.
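Option B expressed as a lifecycle configuration, a sketch (the bucket name is illustrative):

    cat > lifecycle.json <<'EOF'
    {
      "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 90}},
        {"action": {"type": "Delete"}, "condition": {"age": 365}}
      ]
    }
    EOF
    gsutil lifecycle set lifecycle.json gs://my-videos-bucket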
Question: 19
You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account. What should you do?
A. When creating the VM via the web console, specify the service account under the 'Identity and API Access' section.
B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.
Answer: A
Explanation: If you want to run the VM as a different identity, or the instance needs a different set of scopes to call the required APIs, you can change the service account and access scopes of an existing instance, for example so that it runs as a service account you created instead of the Compute Engine default service account. (Google recommends fine-grained IAM policies over access scopes for controlling resource access.) Changing the service account or scopes of an existing instance requires temporarily stopping it and restarting it afterwards; the simplest path is to specify the service account when creating the VM, as in option A.
Reference: https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances

Question: 20
You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps. What should you do?
A. Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
B. Install a RDP client in your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.
C. Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.
D. Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.
Answer: B
Explanation: B is correct. RDP is enabled by default when you create a Windows instance, so there is no need to verify the firewall rule. Just install an RDP client (a Chrome extension or a desktop RDP client) and set a Windows username and password.

Question: 21
You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface. What should you do?
A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.
B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
C. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
D. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.
Answer: A
Explanation: The correct answer is A: you can create a configuration for each account and create compute instances in each account by activating the respective configuration. Refer to the GCP documentation on creating and activating configurations. Options B, C, and D are wrong because listing configurations does not create instances; it only lists the existing named configurations.

Question: 22
You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do?
A. Use granular logging statements within a Deployment Manager template authored in Python.
B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.
C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.
D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.
Answer: D
Explanation: D is correct. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.
Reference: https://cloud.google.com/deployment-manager/docs/deployments/updating-deployments
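A sketch of option D (deployment and file names are illustrative): preview the update, inspect the planned resource states, then commit or discard the preview.

    gcloud deployment-manager deployments update my-deployment \
        --config config.yaml --preview
    gcloud deployment-manager deployments update my-deployment          # commit the preview
    gcloud deployment-manager deployments cancel-preview my-deployment  # or discard it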
Question: 23
You are building a pipeline to process time-series data. Which Google Cloud Platform services should you put in boxes 1, 2, 3, and 4? [Pipeline diagram not reproduced in this transcript.]
A. Cloud Pub/Sub, Cloud Dataflow, Cloud Datastore, BigQuery
B. Firebase Messages, Cloud Pub/Sub, Cloud Spanner, BigQuery
C. Cloud Pub/Sub, Cloud Storage, BigQuery, Cloud Bigtable
D. Cloud Pub/Sub, Cloud Dataflow, Cloud Bigtable, BigQuery
Answer: D
Explanation: Cloud Pub/Sub, Cloud Dataflow, Cloud Bigtable, BigQuery.
Reference: https://cloud.google.com/solutions/correlating-time-series-dataflow

Question: 24
You have a project for your App Engine application that serves a development environment. The required testing has succeeded and you want to create a new project to serve as your production environment. What should you do?
A. Use gcloud to create the new project, and then deploy your application to the new project.
B. Use gcloud to create the new project and to copy the deployed application to the new project.
C. Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project.
D. Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.
Answer: A
Explanation: The correct answer is A: gcloud can create a new project, and gcloud app deploy can then target the new project. Refer to the GCP documentation for gcloud app deploy. Option B is wrong because gcloud has no command to copy a deployed application. Option C is wrong because Deployment Manager does not copy an application; it lets you specify the resources your application needs in a declarative YAML format. Option D is wrong because gcloud app deploy does not create a new project; the project must be created before use.

Question: 25
You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices. What should you do?
A. Add the auditors group to the 'logging.viewer' and 'bigQuery.dataViewer' predefined IAM roles.
B. Add the auditors group to two new custom IAM roles.
C. Add the auditor user accounts to the 'logging.viewer' and 'bigQuery.dataViewer' predefined IAM roles.
D. Add the auditor user accounts to two new custom IAM roles.
Answer: A
Explanation: Per Google best practices, it is recommended to use predefined roles and to grant them to groups, so that multiple users with the same responsibility are managed together.

Question: 26
You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices. What should you do?
A. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/devstorage.write_only'.
B. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/cloud-platform'.
C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket.
D. Create a service account and add it to the IAM role 'storage.objectAdmin' for that bucket.
Answer: C
Explanation: C follows Google's principle of least privilege. A is incorrect because the devstorage.write_only scope does not exist. B is incorrect because the cloud-platform scope grants full access to all APIs. D grants broader control (storage.objectAdmin) than is needed for writing objects.
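A sketch of option C's IAM grant (service account and bucket names are illustrative):

    gsutil iam ch \
        serviceAccount:writer-sa@my-project.iam.gserviceaccount.com:objectCreator \
        gs://my-bucket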
Question: 27
You have sensitive data stored in three Cloud Storage buckets and have enabled data access logging. You want to verify activities for a particular user for these buckets, using the fewest possible steps. You need to verify the addition of metadata labels and which files have been viewed from those buckets. What should you do?
A. Using the GCP Console, filter the Activity log to view the information.
B. Using the GCP Console, filter the Stackdriver log to view the information.
C. View the bucket in the Storage section of the GCP Console.
D. Create a trace in Stackdriver to view the information.
Answer: B
Explanation: B is correct. In this scenario you need to look at the data access audit logs in Stackdriver Logging (now named Cloud Logging).
Reference: https://cloud.google.com/storage/docs/audit-logging

Question: 28
You are the project owner of a GCP project and want to delegate control to colleagues to manage buckets and files in Cloud Storage. You want to follow Google-recommended practices. Which IAM roles should you grant your colleagues?
A. Project Editor
B. Storage Admin
C. Storage Object Admin
D. Storage Object Creator
Answer: B
Explanation: Storage Admin (roles/storage.admin) grants full control of buckets and objects. When applied to an individual bucket, control applies only to the specified bucket and the objects within it. Its permissions include:
- firebase.projects.get
- resourcemanager.projects.get
- resourcemanager.projects.list
- storage.buckets.*
- storage.objects.*

Question: 29
You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?
A. Create a signed URL with a four-hour expiration and share the URL with the company.
B. Set object access to 'public' and use object lifecycle management to remove the object after four hours.
C. Configure the storage bucket as a static website and furnish the object's URL to the company. Delete the object from the storage bucket after four hours.
D. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.
Answer: A
Explanation: Signed URLs are used to give time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account. https://cloud.google.com/storage/docs/access-control/signed-urls
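A sketch of option A with gsutil (key file, bucket, and object names are illustrative; signing requires a service account private key):

    gsutil signurl -d 4h sa-private-key.json gs://my-bucket/sensitive-report.csv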
Question: 30
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?
A. Deploy the monitoring pod in a StatefulSet object.
B. Deploy the monitoring pod in a DaemonSet object.
C. Reference the monitoring pod in a Deployment object.
D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.
Answer: B
Explanation: B is right. Some typical uses of a DaemonSet are:
- running a cluster storage daemon on every node
- running a logs collection daemon on every node
- running a node monitoring daemon on every node
Reference: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

Question: 31
You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?
A. Enable the Cloud Pub/Sub API in the API Library on the GCP Console.
B. Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.
C. Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.
D. Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.
Answer: A
Explanation: The correct answer is A. APIs are not enabled automatically: per the Pub/Sub quickstart, before you begin you must create or select a project and enable the Pub/Sub API for that project, which you can do in the API Library in the Cloud Console (or with the gcloud tool, which also runs in Cloud Shell without installing the Cloud SDK). https://cloud.google.com/pubsub/docs/quickstart-console
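The equivalent gcloud command, a sketch (assumes the active project is the one that needs the API):

    gcloud services enable pubsub.googleapis.com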
Question: 32
You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard. What should you do?
A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
C. Configure a single Stackdriver account, and link all projects to the same account.
D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.
Answer: C
Explanation: D is incorrect: Groups define sets of resources (such as VM instances, databases, and load balancers) for alerting, and projects cannot be added as group criteria; trying to combine two projects in a group via AND/OR criteria fails. C is correct: when you first open Monitoring (Stackdriver Monitoring), a workspace (Stackdriver account) is created and linked to the currently active project. Opening Monitoring from a different active project would create another workspace, which would not consolidate results into a single dashboard. If you accidentally created two workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE. With one workspace and two projects, add the other project under Monitoring > Settings > GCP Projects > Add GCP Projects. In both cases no Group is created; the projects are simply linked to the one workspace (Stackdriver account).

Question: 33
You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group?
A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
Answer: A
Explanation: Testing each configuration in the GCP Console by creating the managed instance group and then deleting its instance shows: with autoscaling on (A and C), a replacement instance is created after the deletion; with autoscaling off (B and D), no replacement is created, so B and D fail the requirement that the application is always running. C is incorrect because a maximum of 2 allows a second VM to be created, violating the single-instance requirement. A is correct: it keeps exactly one instance running at all times.

Question: 34
You want to verify the IAM users and roles assigned within a GCP project named my-project. What should you do?
A. Run gcloud iam roles list. Review the output section.
B. Run gcloud iam service-accounts list. Review the output section.
C. Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles.
D. Navigate to the project and then to the Roles section in the GCP Console. Review the roles and status.
Answer: C
Explanation: The correct answer is C, as the IAM section lists both members and roles. Option A is wrong: it only provides information about roles. Option B is wrong: it only lists service accounts. Option D is wrong: it only provides information about roles and their status.

Question: 35
You need to create a new billing account and then link it with an existing Google Cloud Platform project. What should you do?
A. Verify that you are Project Billing Manager for the GCP project. Update the existing project to link it to the existing billing account.
B. Verify that you are Project Billing Manager for the GCP project. Create a new billing account and link the new billing account to the existing project.
C. Verify that you are Billing Administrator for the billing account. Create a new project and link the new project to the existing billing account.
D. Verify that you are Billing Administrator for the billing account. Update the existing project to link it to the existing billing account.
Answer: B
Explanation: The answer is B. Billing Administrators cannot create a new billing account, and the project presumably already exists, which rules out C and D. Project Billing Manager allows you to link the created billing account to the project. The question is vague about how the billing account gets created, but by process of elimination B is the correct answer.

Question: 36
You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots of VMs running in another project called proj-vm. What should you do?
A. Download the private key from the service account, and add it to each VMs custom metadata.
B. Download the private key from the service account, and add the private key to each VM's SSH keys.
C. Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.
D. When creating the VMs, set the service account's API scope for Compute Engine to read/write.
Answer: C
Explanation: You create the service account in proj-sa and take note of the service account email, then you go to proj-vm in IAM > ADD, add the service account's email as a new member, and give it the Compute Storage Admin role.
Reference: https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0
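Option C as a command, a sketch (the service account email is illustrative):

    gcloud projects add-iam-policy-binding proj-vm \
        --member="serviceAccount:snapshot-sa@proj-sa.iam.gserviceaccount.com" \
        --role="roles/compute.storageAdmin"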
Question: 37
You created a Google Cloud Platform project with an App Engine application inside the project. You initially configured the application to be served from the us-central region. Now you want the application to be served from the asia-northeast1 region. What should you do?
A. Change the default region property setting in the existing GCP project to asia-northeast1.
B. Change the region property setting in the existing App Engine application from us-central to asia-northeast1.
C. Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.
D. Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application.
Answer: D
Explanation: D is correct: there can be only one App Engine application inside a project, which rules out C. See https://cloud.google.com/appengine/docs/standard/an-overview-of-app-engine#components_of_an_application. Also, an App Engine application's region cannot be changed after it is set: https://cloud.google.com/appengine/docs/standard/locations

Question: 38
You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance. What should you do?
A. Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to the role.
B. Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to a new group. Add the group to the role.
C. Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to the role.
D. Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to a new group. Add the group to the role.
Answer: B
Explanation: B is right. Run gcloud iam roles describe roles/spanner.databaseUser, add the users to a newly created Google group, and add the group to the role. The databaseUser role allows viewing and editing table data, and granting it to a group follows recommended practice.

Question: 39
You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and stable version of Kubernetes. What should you do?
A. Enable the Node Auto-Repair feature for your GKE cluster.
B. Enable the Node Auto-Upgrades feature for your GKE cluster.
C. Select the latest available cluster version for your GKE cluster.
D. Select Container-Optimized OS (cos) as a node image for your GKE cluster.
Answer: B
Explanation: Creating or upgrading a cluster pinned to a specific version does not provide automatic upgrades. Enable automatic node upgrades to ensure that the nodes in your cluster stay up to date with the latest stable version. (Source: https://cloud.google.com/kubernetes-engine/versioning-and-upgrades) Correct answer: B.
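A sketch of enabling node auto-upgrade at cluster-creation time (cluster name and zone are illustrative):

    gcloud container clusters create my-cluster \
        --zone us-central1-a --enable-autoupgrade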
Question: 40
You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a public web application over HTTPS. You want to follow Google-recommended practices. What should you do?
A. Configure an HTTP(S) load balancer.
B. Configure an internal TCP load balancer.
C. Configure an external SSL proxy load balancer.
D. Configure an external TCP proxy load balancer.
Answer: A
Explanation: According to Google's documentation on SSL Proxy Load Balancing: "SSL Proxy Load Balancing is intended for non-HTTP(S) traffic. For HTTP(S) traffic, we recommend that you use HTTP(S) Load Balancing."
Reference: https://cloud.google.com/load-balancing/docs/https/

Question: 41
You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?
A. Use the GCP Console to transfer the file instead of gsutil.
B. Enable parallel composite uploads using gsutil on the file transfer.
C. Decrease the TCP window size on the machine initiating the transfer.
D. Change the storage class of the bucket from Nearline to Multi-Regional.
Answer: B
Explanation: The correct answer is B: the bandwidth is good and it is a single large file, so gsutil parallel composite uploads can split the file and upload the pieces in parallel. Refer to the GCP documentation on transferring data to GCP.

Question: 42
You've deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below. [YAML manifest not reproduced in this transcript.] You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should you do?
A. Store the database password inside the Docker image of the container, not in the YAML file.
B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.
C. Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.
D. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.
Answer: B
Explanation: It is good practice to use Secrets for confidential data (like API keys and passwords) and ConfigMaps for non-confidential data (like port numbers). B is correct.
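A sketch of option B's first step (secret and key names are illustrative); the Deployment YAML then populates DB_PASSWORD via env[].valueFrom.secretKeyRef:

    kubectl create secret generic myapp1-db \
        --from-literal=DB_PASSWORD='s3cr3t-value'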
Question: 43
You are running an application on multiple virtual machines within a managed instance group and have autoscaling enabled. The autoscaling policy is configured so that additional instances are added to the group if the CPU utilization of instances goes above 80%. VMs are added until the instance group reaches its maximum limit of five VMs or until CPU utilization of instances lowers to 80%. The initial delay for HTTP health checks against the instances is set to 30 seconds. The virtual machine instances take around three minutes to become available for users. You observe that when the instance group autoscales, it adds more instances than necessary to support the levels of end-user traffic. You want to properly maintain instance group sizes when autoscaling. What should you do?
A. Set the maximum number of instances to 1.
B. Decrease the maximum number of instances to 3.
C. Use a TCP health check instead of an HTTP health check.
D. Increase the initial delay of the HTTP health check to 200 seconds.
Answer: D
Explanation: The answer is D. The instances take around three minutes to become available, but the health check's initial delay is only 30 seconds, so still-booting VMs are treated as unhealthy and the group adds more instances than necessary. Increasing the initial delay to 200 seconds gives instances time to start before health checking begins.

Question: 44
You need to select and configure compute resources for a set of batch processing jobs. These jobs take around 2 hours to complete and are run nightly. You want to minimize service costs. What should you do?
A. Select Google Kubernetes Engine. Use a single-node cluster with a small instance type.
B. Select Google Kubernetes Engine. Use a three-node cluster with micro instance types.
C. Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type.
D. Select Compute Engine. Use VM instance types that support micro bursting.
Answer: C
Explanation: If your apps are fault-tolerant and can withstand possible instance preemptions, preemptible instances can reduce your Compute Engine costs significantly. For example, batch processing jobs can run on preemptible instances: if some of those instances stop during processing, the job slows but does not completely stop. Preemptible instances complete your batch processing tasks without placing additional workload on your existing instances and without requiring you to pay full price for additional normal instances.

Question: 45
You recently deployed a new version of an application to App Engine and then discovered a bug in the release. You need to immediately revert to the prior version of the application. What should you do?
A. Run gcloud app restore.
B. On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.
C. On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.
D. Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.
Answer: C
Explanation: C. Option A is wrong: gcloud app restore was used for backup and restore and has been deprecated. Option B is wrong: there is no such Revert functionality. Option D is wrong: App Engine maintains versions, so the original version need not be redeployed.
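Option C from the command line, a sketch (service and version names are illustrative):

    gcloud app services set-traffic default --splits previous-version=1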
Question: 46
You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended project. You want to find out why this happened and where the application deployed. What should you do?
A. Check the app.yaml file for your application and check project settings.
B. Check the web-application.xml file for your application and check project settings.
C. Go to Deployment Manager and review settings for deployment of applications.
D. Go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.
Answer: D
Explanation: D: go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment, which includes the project the deployment targeted. Option A is not correct: app.yaml only holds the runtime and handler parameters, not the project details.

Question: 47
You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?
A. Create an instance template for the instances. Set the 'Automatic Restart' to on. Set the 'On-host maintenance' to Migrate VM instance. Add the instance template to an instance group.
B. Create an instance template for the instances. Set 'Automatic Restart' to off. Set 'On-host maintenance' to Terminate VM instances. Add the instance template to an instance group.
C. Create an instance group for the instances. Set the 'Autohealing' health check to healthy (HTTP).
D. Create an instance group for the instances. Verify that the 'Advanced creation options' setting for 'do not retry machine creation' is set to off.
Answer: A
Explanation: onHostMaintenance determines the behavior when a maintenance event occurs that might cause your instance to reboot: MIGRATE (the default) causes Compute Engine to live migrate the instance, while TERMINATE stops it instead. automaticRestart determines the behavior when an instance crashes or is stopped by the system: true (the default) makes Compute Engine restart the instance; false does not.
Reference: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options

Question: 48
You host a static website on Cloud Storage. Recently, you began to include links to PDF files on this site. Currently, when users click on the links to these PDF files, their browsers prompt them to save the file onto their local system. Instead, you want the clicked PDF files to be displayed within the browser window directly, without prompting the user to save the file locally. What should you do?
A. Enable Cloud CDN on the website frontend.
B. Enable 'Share publicly' on the PDF file objects.
C. Set Content-Type metadata to application/pdf on the PDF file objects.
D. Add a label to the storage bucket with a key of Content-Type and value of application/pdf.
Answer: C
Explanation: C. Set Content-Type metadata to application/pdf on the PDF file objects, so browsers render them inline instead of prompting a download.

Question: 49
You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory. What should you do?
A. Rely on live migration to move the workload to a machine with more memory.
B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
D. Stop the VM, increase the memory to 8 GB, and start the VM.
Answer: D
Explanation: D. Stop the VM, increase the memory to 8 GB, and start the VM. Option C changes more than needed: n1-standard-8 has 8 vCPUs and 30 GB of memory, whereas only the memory needs to grow.
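A sketch of option D using a custom machine type (VM name and zone are illustrative; custom-2-8192 keeps 2 vCPUs and sets 8 GB of memory):

    gcloud compute instances stop my-vm --zone us-central1-a
    gcloud compute instances set-machine-type my-vm --zone us-central1-a \
        --machine-type custom-2-8192
    gcloud compute instances start my-vm --zone us-central1-a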
Question: 50
You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the VMs must be able to reach each other over Internal IP without creating additional routes. You need to set up VPC and the 2 subnets. Which configuration meets these requirements?
A. Create a single custom VPC with 2 subnets. Create each subnet in a different region and with a different CIDR range.
B. Create a single custom VPC with 2 subnets. Create each subnet in the same region and with the same CIDR range.
C. Create 2 custom VPCs, each with a single subnet. Create each subnet in a different region and with a different CIDR range.
D. Create 2 custom VPCs, each with a single subnet. Create each subnet in the same region and with the same CIDR range.
Answer: A
Explanation: The different regions may look odd, but the deciding factor is the CIDR ranges: two subnets in the same VPC cannot use the same CIDR range. "Each primary or secondary IPv4 range for all subnets in a VPC network must be a unique valid CIDR block." (https://cloud.google.com/vpc/docs/vpc) That rules out B and D, and separate VPCs (C) would need additional routes or peering. A VPC is global, so subnets in different regions still reach each other over internal IPs.

Question: 51
You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?
A. Create a health check on port 443 and use that when creating the Managed Instance Group.
B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
C. In the Instance Template, add the label 'health-check'.
D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.
Answer: A
Explanation: A. Create a health check on port 443 (HTTPS) and use it when creating the managed instance group; autohealing then recreates instances that fail the check.
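A sketch of option A (names and values are illustrative):

    gcloud compute health-checks create https my-https-check --port 443
    gcloud compute instance-groups managed create my-mig \
        --zone us-central1-a --size 3 --template my-template \
        --health-check my-https-check --initial-delay 300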
Question: 52
Your company has a Google Cloud Platform project that uses BigQuery for data warehousing. Your data science team changes frequently and has few members. You need to allow members of this team to perform queries. You want to follow Google-recommended practices. What should you do?
A. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery jobUser role to the group.
B. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery dataViewer user role to the group.
C. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery jobUser role to the group.
D. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery dataViewer user role to the group.
Answer: C
Explanation: C is correct. Per the docs, when applied to a dataset, dataViewer only provides permissions to read the dataset's metadata, list its tables, and read data and metadata from those tables; additional roles are necessary to run jobs. jobUser allows running queries, and granting it to a group suits a frequently changing team.

Question: 53
Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown below. [Infrastructure diagram not reproduced in this transcript.] Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows:
* Instances in tier #1 must communicate with tier #2.
* Instances in tier #2 must communicate with tier #3.
What should you do?
A. 1. Create an ingress firewall rule with the following settings: Targets: all instances; Source filter: IP ranges (with the range set to 10.0.2.0/24); Protocols: allow all. 2. Create an ingress firewall rule with the following settings: Targets: all instances; Source filter: IP ranges (with the range set to 10.0.1.0/24); Protocols: allow all.
B. 1. Create an ingress firewall rule with the following settings: Targets: all instances with tier #2 service account; Source filter: all instances with tier #1 service account; Protocols: allow TCP:8080. 2. Create an ingress firewall rule with the following settings: Targets: all instances with tier #3 service account; Source filter: all instances with tier #2 service account; Protocols: allow TCP:8080.
C. 1. Create an ingress firewall rule with the following settings: Targets: all instances with tier #2 service account; Source filter: all instances with tier #1 service account; Protocols: allow all. 2. Create an ingress firewall rule with the following settings: Targets: all instances with tier #3 service account; Source filter: all instances with tier #2 service account; Protocols: allow all.
D. 1. Create an egress firewall rule with the following settings: Targets: all instances; Source filter: IP ranges (with the range set to 10.0.2.0/24); Protocols: allow TCP:8080. 2. Create an egress firewall rule with the following settings: Targets: all instances; Source filter: IP ranges (with the range set to 10.0.1.0/24); Protocols: allow TCP:8080.
Answer: B
Explanation: B is correct: ingress rules targeted at each tier's service account, with the previous tier's service account as the source filter, allow only TCP:8080 between exactly the tiers that need it, following least privilege.
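The first rule of option B as a command, a sketch (network name and service account emails are illustrative):

    gcloud compute firewall-rules create allow-tier1-to-tier2 \
        --network my-vpc --direction INGRESS --action ALLOW --rules tcp:8080 \
        --source-service-accounts tier1-sa@my-project.iam.gserviceaccount.com \
        --target-service-accounts tier2-sa@my-project.iam.gserviceaccount.com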
Question: 54 CertyIQ
You are given a project with a single Virtual Private Cloud (VPC) and a single subnetwork in the us-central1 region. There is a Compute Engine instance hosting an application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the application. You want to follow Google-recommended practices. What should you do?

A. 1. Create a subnetwork in the same VPC, in europe-west1. 2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.
B. 1. Create a VPC and a subnetwork in europe-west1. 2. Expose the application with an internal load balancer. 3. Create the new instance in the new subnetwork and use the load balancer's address as the endpoint.
C. 1. Create a subnetwork in the same VPC, in europe-west1. 2. Use Cloud VPN to connect the two subnetworks. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.
D. 1. Create a VPC and a subnetwork in europe-west1. 2. Peer the 2 VPCs. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

Answer: A

Explanation:
A is correct. A VPC is a global resource and its subnets are regional, so instances in different regions of the same VPC can reach each other over private addresses by default. Simply create a subnetwork in europe-west1 in the same VPC and use the first instance's private address as the endpoint; no VPN or peering is needed.

Question: 55 CertyIQ
Your projects incurred more costs than you expected last month. Your research reveals that a development GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What should you do?

A. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.
B. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.
C. 1. Go to the GKE console, and delete existing clusters. 2. Recreate a new cluster. 3. Clear the option to enable legacy Stackdriver Logging.
D. 1. Go to the GKE console, and delete existing clusters. 2. Recreate a new cluster. 3. Clear the option to enable legacy Stackdriver Monitoring.

Answer: A

Explanation:
A is correct. See https://cloud.google.com/logging/docs/api/v2/resource-list: GKE Container logs are far more voluminous than GKE Cluster Operations logs. The GKE Container resource carries the labels cluster_name, namespace_id, instance_id, pod_id, container_name, and zone, i.e. the per-container application logs that drove up the cost, whereas GKE Cluster Operations carries only project_id, cluster_name, and location. Disabling the GKE Container log source is therefore the quickest fix.

Question: 56 CertyIQ
You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity. What should you do?

A. Deploy the new version in the same application and use the --migrate option.
B. Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
C. Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
D. Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.

Answer: B

Explanation:
B is the answer. Option A is wrong because --migrate moves all traffic to the new version rather than splitting it. Options C and D add unnecessary complexity: there is no need for a separate application, since App Engine can split traffic between versions of the same service at any time.
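A minimal sketch of the traffic-split flow from the gcloud CLI, assuming hypothetical version IDs v1 (current) and v2 (new) on the default service:

    # Deploy the new version without routing any traffic to it.
    gcloud app deploy --version=v2 --no-promote
    # Send 1% of traffic to v2 and 99% to v1, split randomly per request.
    gcloud app services set-traffic default --splits=v1=0.99,v2=0.01 --split-by=random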
Question: 57 CertyIQ
You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy. Your web application is currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment. What should you do?

A. Perform a rolling-action start-update with maxSurge set to 0 and maxUnavailable set to 1.
B. Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.
C. Create a new managed instance group with an updated instance template. Add the group to the backend service for the load balancer. When all instances in the new managed instance group are healthy, delete the old managed instance group.
D. Create a new instance template with the new application version. Update the existing managed instance group with the new instance template. Delete the instances in the managed instance group to allow the managed instance group to recreate the instance using the new instance template.

Answer: B

Explanation:
The correct option is B. To keep available capacity intact, set maxUnavailable to 0 so no serving instance is taken down before its replacement is ready; to allow new instances to be created ahead of deletions, set maxSurge to 1. Option C is more expensive and harder to set up, and option D will not meet the requirement because capacity drops while instances are being recreated.
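A hedged example of that rolling update, assuming a hypothetical zonal MIG (my-mig in us-central1-a) and a new instance template (my-template-v2):

    # Roll out the new template one surge instance at a time, never taking an
    # existing instance offline before its replacement is ready.
    gcloud compute instance-groups managed rolling-action start-update my-mig \
        --version=template=my-template-v2 \
        --max-surge=1 --max-unavailable=0 \
        --zone=us-central1-a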
Question: 58 CertyIQ
You are building an application that stores relational data from users. Users across the globe will use this application. Your CTO is concerned about the scaling requirements because the size of the user base is unknown. You need to implement a database solution that can scale with your user growth with minimum configuration changes. Which storage solution should you use?

A. Cloud SQL
B. Cloud Spanner
C. Cloud Firestore
D. Cloud Datastore

Answer: B

Explanation:
B is correct. Cloud SQL handles smaller relational workloads and scales manually; Cloud Spanner is relational and scales horizontally with minimal configuration changes, which fits a global user base of unknown size; Cloud Firestore and Cloud Datastore are non-relational document databases.

Question: 59 CertyIQ
You are the organization and billing administrator for your company. The engineering team has the Project Creator role on the organization. You do not want the engineering team to be able to link projects to the billing account. Only the finance team should be able to link a project to a billing account, but they should not be able to make any other changes to projects. What should you do?

A. Assign the finance team only the Billing Account User role on the billing account.
B. Assign the engineering team only the Billing Account User role on the billing account.
C. Assign the finance team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.
D. Assign the engineering team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

Answer: A

Explanation:
A is correct: the finance team should only be able to link a project to a billing account, nothing more. According to https://cloud.google.com/billing/docs/how-to/billing-access#overview_of_billing_roles_in, a Project Billing Manager can both link and unlink projects, which is more than the question allows, so C grants the finance team too much, while B and D would give the engineering team billing permissions it should not have.

Question: 60 CertyIQ
You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are several replicas of this application. You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?

A. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Set the service's externalTrafficPolicy to Cluster. 3. Configure the Compute Engine instance to use the address of the load balancer that has been created.
B. 1. In GKE, create a Service of type NodePort that uses the application's Pods as backend. 2. Create a Compute Engine instance called proxy with 2 network interfaces, one in each VPC. 3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes. 4. Configure the Compute Engine instance to use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Add an annotation to this service: cloud.google.com/load-balancer-type: Internal 3. Peer the two VPCs together. 4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Add a Cloud Armor Security Policy to the load balancer that whitelists the internal IPs of the MIG's instances. 3. Configure the Compute Engine instance to use the address of the load balancer that has been created.

Answer: C

Explanation:
C is the answer (see https://cloud.google.com/load-balancing/docs/choosing-load-balancer#external-internal). Because the two VPCs have no overlapping IP ranges, VPC peering will work, and an internal load balancer exposes the GKE application to the peered network. Note that firewall rules must still allow the internal traffic on both VPCs.
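A sketch of the internal load balancer Service plus the peering step, assuming hypothetical names (Service my-app-ilb, Pod label app: my-app, networks gke-network and gce-network in the same project). Newer GKE versions offer other ways to request an internal load balancer, but the legacy annotation from the question is shown:

    # Expose the Pods through an internal TCP load balancer.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-ilb
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 8080
    EOF
    # Peer the two VPCs (a matching peering must also be created from gce-network).
    gcloud compute networks peerings create gke-to-gce \
        --network=gke-network --peer-network=gce-network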
Question: 61 CertyIQ
Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?

A. Create an export to the sink that saves logs from Cloud Audit to BigQuery.
B. Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
C. Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
D. Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.

Answer: B

Explanation:
Option B, because the question asks for a cost-effective solution. BigQuery long-term storage costs about the same as Coldline for data kept beyond 90 days, but with Cloud Storage you can save even more by later moving objects to the Archive class, which is cheaper than Coldline. So it is definitely option B.
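A hedged sketch of an organization-level sink into a Coldline bucket, assuming a hypothetical bucket name and organization ID:

    # Create the Coldline bucket that will hold the archived audit logs.
    gsutil mb -c coldline -l us-central1 gs://audit-log-archive-example
    # Create an aggregated sink that captures audit logs from every project in the org.
    gcloud logging sinks create audit-archive-sink \
        storage.googleapis.com/audit-log-archive-example \
        --organization=123456789012 --include-children \
        --log-filter='logName:"cloudaudit.googleapis.com"'
    # The command prints a writer identity; grant it roles/storage.objectCreator
    # on the bucket so the sink can write objects.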
Question: 62 CertyIQ
You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?

A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.
B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Answer: A

Explanation:
A is correct, which you can verify by pricing the options in the console: a Basic-tier, 32 GB Memorystore for Redis instance in us-central1-a is estimated at about $0.023/GB/hr, while a custom Compute Engine machine with 6 vCPUs and 32 GB of memory in the same zone is about $0.239/hr. Option B costs more because you pay for vCPUs the proxy barely uses, so A is the cheaper choice in practice.

Question: 63 CertyIQ
You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the application with access to Cloud Storage. What should you do?

A. 1. Use nslookup to get the IP address for storage.googleapis.com. 2. Negotiate with the security team to be able to give a public IP address to the servers. 3. Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud. 2. In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance. 3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine. 2. Create an internal load balancer (ILB) that uses storage.googleapis.com as backend. 3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in Google Cloud. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel. 3. In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.

Answer: D

Explanation:
The correct answer is D. There is no reason for the customer to migrate its environment to GCP; the easiest and Google-recommended approach is Private Google Access for on-premises hosts: set up Cloud VPN or Interconnect, advertise the 199.36.153.4/30 restricted range through Cloud Router, and resolve *.googleapis.com to restricted.googleapis.com on the on-premises DNS server.

Question: 64 CertyIQ
You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What should you do?

A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic. 2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run. 2. Create a Cloud Pub/Sub subscription for that topic. 3. Make your application pull messages from that subscription.
C. 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal. 2. Create a Cloud Pub/Sub subscription for that topic. 3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.

Answer: C

Explanation:
C is the correct answer. Google recommends a push subscription for Cloud Run: create a service account, grant it the Cloud Run Invoker role on your Cloud Run application, and create a Cloud Pub/Sub subscription that authenticates as that service account and pushes to the application's endpoint.
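A minimal sketch of that push setup, assuming hypothetical names (service my-app in us-central1, topic my-topic, project my-project, and the service's HTTPS URL):

    # Service account that Pub/Sub will use to call Cloud Run.
    gcloud iam service-accounts create pubsub-invoker
    # Allow that account to invoke the Cloud Run service.
    gcloud run services add-iam-policy-binding my-app --region=us-central1 \
        --member=serviceAccount:pubsub-invoker@my-project.iam.gserviceaccount.com \
        --role=roles/run.invoker
    # Push subscription that delivers each message to the service,
    # authenticated as that service account.
    gcloud pubsub subscriptions create my-sub --topic=my-topic \
        --push-endpoint=https://my-app-xyz-uc.a.run.app/ \
        --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com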
Question: 65 CertyIQ
You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs. What should you do?

A. Deploy the container on Cloud Run.
B. Deploy the container on Cloud Run on GKE.
C. Deploy the container on App Engine Flexible.
D. Deploy the container on GKE with cluster autoscaling and horizontal pod autoscaling enabled.

Answer: A

Explanation:
The correct answer is A. Cloud Run takes any container image and pairs well with the container ecosystem (Cloud Build, Artifact Registry, Docker). There is no infrastructure to manage, and it autoscales from zero to N with traffic, so an endpoint that receives very few requests per day costs almost nothing. https://cloud.google.com/run

Question: 66 CertyIQ
Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do?

A. Link the acquired company's projects to your company's billing account.
B. Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.
C. Migrate the acquired company's projects into your company's GCP organization. Link the migrated projects to your company's billing account.
D. Create a new GCP organization and a new billing account. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.

Answer: A

Explanation:
A is correct. Migrating projects between organizations requires Google Cloud support and cannot be done unilaterally, so it would not be finished by tomorrow. Projects from another organization can, however, be linked directly to your existing billing account to consolidate all costs on one invoice. https://medium.com/google-cloud/google-cloud-platform-cross-org-billing-41c5db8fefa6

Question: 67 CertyIQ
You built an application on Google Cloud that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?

A. Add the support team group to the roles/monitoring.viewer role
B. Add the support team group to the roles/spanner.databaseUser role.
C. Add the support team group to the roles/spanner.databaseReader role.
D. Add the support team group to the roles/stackdriver.accounts.viewer role.

Answer: A

Explanation:
A is correct. roles/monitoring.viewer grants read-only access to monitoring data for all resources in the project, which lets the support team monitor the environment without reading table data. B (roles/spanner.databaseUser) grants read and write access to table data, and C (roles/spanner.databaseReader) grants read-only access to table data, so both expose data the team should not see. D (roles/stackdriver.accounts.viewer) only gives read-only access to the Stackdriver account structure (resourcemanager.projects.get, resourcemanager.projects.list, stackdriver.projects.get) and is not suited to monitoring the Spanner environment.

Question: 68 CertyIQ
For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Cloud Logging agent on all the instances. You want to minimize cost. What should you do?

A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances' metadata to add the following value: logs-destination: bq://platform-logs.
B. 1. In Cloud Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Cloud Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery Job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.

Answer: C

Explanation:
C is correct. In Cloud Logging, create a filter to view only Compute Engine logs, click Create Export, and choose BigQuery as the Sink Service with the platform-logs dataset as the Sink Destination. Reference: https://cloud.google.com/logging/docs/export/configure_export_v2
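A hedged equivalent of those console steps using gcloud, assuming a hypothetical project ID and that the dataset already exists (BigQuery dataset names cannot contain hyphens, so the dataset from the question is written platform_logs here):

    # Export only Compute Engine logs to the BigQuery dataset.
    gcloud logging sinks create platform-logs-sink \
        bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
        --log-filter='resource.type="gce_instance"'
    # The command prints a writer identity; grant it the BigQuery Data Editor
    # role on the dataset so the sink can write log entries.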
Question: 69 CertyIQ
You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do?

A. Add the cluster's API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
B. Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
C. With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
D. In the cluster's definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.

Answer: A

Explanation:
The correct answer is A. A type provider exposes all resources of a third-party API to Deployment Manager as base types you can use in your configurations; the API must be RESTful and support Create, Read, Update, and Delete (CRUD). Adding the cluster's Kubernetes API as a type provider lets the same deployment create the DaemonSet without any additional services. https://cloud.google.com/deployment-manager/docs/configuration/type-providers/creating-type-provider

Question: 70 CertyIQ
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?

A. Use service account credentials in your on-premises application.
B. Use gcloud to create a key file for the service account that has appropriate permissions.
C. Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
D. Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.

Answer: B

Explanation:
B is correct. To use a service account outside of Google Cloud, such as on other platforms or on-premises, you must first establish the identity of the service account; public/private key pairs provide a secure way of accomplishing this. https://cloud.google.com/iam/docs/creating-managing-service-account-keys
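A minimal sketch of creating and using such a key, assuming a hypothetical service account (automl-sa in project my-project):

    # Create a JSON key for the service account that has AutoML access.
    gcloud iam service-accounts keys create ~/automl-key.json \
        --iam-account=automl-sa@my-project.iam.gserviceaccount.com
    # On the on-premises host, point Google client libraries at the key file.
    export GOOGLE_APPLICATION_CREDENTIALS=~/automl-key.json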
Question: 71 CertyIQ
You are using Container Registry to centrally store your company's container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?

A. In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.
B. When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under 'Access scopes'.
C. Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.
D. Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.

Answer: A

Explanation:
A is correct. Container Registry stores images in Cloud Storage, so Cloud Storage IAM permissions determine who can pull them. By default, Google Cloud service accounts can only interact with resources in their own project; because the registry lives in a different project, you must explicitly grant the Storage Object Viewer role to the service account used by the Kubernetes nodes in the project where the images are stored. https://cloud.google.com/container-registry/docs/access-control

Question: 72 CertyIQ
You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below. You check the status of the deployed pods and notice that one of them is still in PENDING status: You want to find out why the pod is stuck in pending status. What should you do?

A. Review details of the myapp-service Service object and check for error messages.
B. Review details of the myapp-deployment Deployment object and check for error messages.
C. Review details of myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.
D. View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.

Answer: C

Explanation:
C is the correct answer. If a Pod is stuck in Pending, it cannot be scheduled onto a node, generally because there are insufficient resources of one type or another. Look at the output of the kubectl describe command for the Pod: there should be messages from the scheduler explaining why it cannot be scheduled. Reference: https://cloud.google.com/run/docs/gke/troubleshooting
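A short example of the inspection commands the explanation refers to, using the pod name from the question:

    # Show scheduling status, conditions, and events for the pending pod.
    kubectl describe pod myapp-deployment-58ddbbb995-lp86m
    # Alternatively, list only the events involving that pod.
    kubectl get events \
        --field-selector involvedObject.name=myapp-deployment-58ddbbb995-lp86m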
Question: 73 CertyIQ
You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?

A. After the VM has been created, use your Google Account credentials to log in into the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using 'windows-password' as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.

Answer: B

Explanation:
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM. https://cloud.google.com/sdk/gcloud/reference/beta/compute/reset-windows-password

Question: 74 CertyIQ
You want to configure an SSH connection to a single Compute Engine instance for users in the dev1 group. This instance is the only resource in this particular Google Cloud Platform project that the dev1 users should be able to connect to. What should you do?

A. Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to ssh to that instance.
B. Set metadata to enable-oslogin=true for the instance. Set the service account to no service account for that instance. Direct them to use the Cloud Shell to ssh to that instance.
C. Enable block project wide keys for the instance. Generate an SSH key for each user in the dev1 group. Distribute the keys to dev1 users and direct them to use their third-party tools to connect.
D. Enable block project wide keys for the instance. Generate an SSH key and associate the key with that instance. Distribute the key to dev1 users and direct them to use their third-party tools to connect.

Answer: A

Explanation:
A is correct. You can grant the roles/compute.osLogin instance access role at the instance level by using the gcloud compute instances add-iam-policy-binding command, which scopes access to just this instance. https://cloud.google.com/compute/docs/instances/managing-instance-access#grant-iam-roles

Question: 75 CertyIQ
You need to produce a list of the enabled Google Cloud Platform APIs for a GCP project using the gcloud command line in the Cloud Shell. The project name is my-project. What should you do?

A. Run gcloud projects list to get the project ID, and then run gcloud services list --project.
B. Run gcloud init to set the current project to my-project, and then run gcloud services list --available.
C. Run gcloud info to view the account value, and then run gcloud services list --account.
D. Run gcloud projects describe to verify the project value, and then run gcloud services list -- available.

Answer: A

Explanation:
A is correct. For those who have doubts: `gcloud services list --available` returns not only the services enabled in the project but also the services that CAN be enabled, so options B and D are incorrect. https://cloud.google.com/sdk/gcloud/reference/services/list#--available

Question: 76 CertyIQ
You are building a new version of an application hosted in an App Engine environment. You want to test the new version with 1% of users before you completely switch your application over to the new version. What should you do?

A. Deploy a new version of your application in Google Kubernetes Engine instead of App Engine and then use GCP Console to split traffic.
B. Deploy a new version of your application in a Compute Engine instance instead of App Engine and then use GCP Console to split traffic.
C. Deploy a new version as a separate app in App Engine. Then configure App Engine using GCP Console to split traffic between the two apps.
D. Deploy a new version of your application in App Engine. Then go to App Engine settings in GCP Console and split traffic between the current version and newly deployed versions accordingly.

Answer: D

Explanation:
D. Deploy the new version of your application in App Engine, then go to App Engine settings in the GCP Console and split traffic between the current version and the newly deployed version accordingly.

Question: 77 CertyIQ
You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator for Kubernetes. Your workload requires high IOPs, and you will also be using disk snapshots. You start by entering the number of nodes, average hours, and average days. What should you do next?

A. Fill in local SSD. Fill in persistent disk storage and snapshot storage.
B. Fill in local SSD. Add estimated cost for cluster management.
C. Select Add GPUs. Fill in persistent disk storage and snapshot storage.
D. Select Add GPUs. Add estimated cost for cluster management.

Answer: A

Explanation:
A is correct. High IOPS calls for local SSD, and the disk snapshots require filling in persistent disk storage and snapshot storage. There is no need to add an estimated cost for cluster management as in option B, because the calculator already includes it.

Question: 78 CertyIQ
You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS on a public IP address. What should you do?

A. Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
B. Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
C. Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
D. Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptable rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.

Answer: A

Explanation:
A is the right option. Option B is wrong because a ClusterIP is an internal address and cannot be exposed publicly. Option C is wrong because pointing the public DNS name at every node's IP does not provide real load balancing or managed HTTPS termination. Option D is wrong because a self-managed HAProxy pod pinned to a single node adds complexity and a single point of failure. A NodePort Service behind a Kubernetes Ingress provisions a Cloud Load Balancer that serves HTTPS on a public IP.
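A hedged sketch of option A, assuming hypothetical names (a Deployment labeled app: my-app listening on 8443, and a TLS certificate/key pair stored in a Kubernetes secret my-app-tls); on GKE this Ingress provisions an external HTTP(S) load balancer with a public IP:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
      - port: 443
        targetPort: 8443
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app-ingress
    spec:
      tls:
      - secretName: my-app-tls   # cert/key pair used for HTTPS termination
      defaultBackend:
        service:
          name: my-app
          port:
            number: 443
    EOF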
Question: 79 CertyIQ
You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute Engine instances is running in its own VPC. What should you do?

A. Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
B. Verify that both projects are in a