[PDF and VCE] Latest DP-420 Exam Practice Materials Free Download
Attention please! Here is the shortcut to pass your DP-420 exam! Getting well prepared for the Microsoft Certified: Azure Cosmos DB Developer Specialty DP-420 Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB exam is a hard job. But don’t worry! We provide the most up-to-date DP-420 questions. With our latest DP-420 VCE, you’ll pass the Microsoft Certified: Azure Cosmos DB Developer Specialty DP-420 Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB exam the easy way.
Visit our site to get more DP-420 Q&As: https://www.itexamfun.com/dp-420.html (51 Q&As Dumps)
Question 1:
You are troubleshooting the current issues caused by the application updates.
Which action can address the application updates issue without affecting the functionality of the application?
A. Enable time to live for the con-product container.
B. Set the default consistency level of account1 to strong.
C. Set the default consistency level of account1 to bounded staleness.
D. Add a custom indexing policy to the con-product container.
Correct Answer: C
Bounded staleness is frequently chosen by globally distributed applications that expect low write latencies but require a total global order guarantee. Bounded staleness is great for applications featuring group collaboration and sharing, stock tickers, publish-subscribe/queueing, etc.
Scenario: Application updates in con-product frequently cause HTTP status code 429 “Too many requests”. You discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.
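For context, the default consistency level is an account-level setting (changed in the portal, the Azure CLI, or an ARM template). A minimal sketch with the Python SDK (azure-cosmos) that inspects the current default, assuming a hypothetical endpoint and key:

from azure.cosmos import CosmosClient

ENDPOINT = "https://account1.documents.azure.com:443/"  # hypothetical endpoint for account1
KEY = "<primary-key>"                                   # hypothetical key

client = CosmosClient(ENDPOINT, credential=KEY)

# Read the account properties; ConsistencyPolicy shows the current default
# consistency level (for example Session or BoundedStaleness). Changing the
# default is a control-plane operation, not something done through this SDK.
account_properties = client.get_database_account()
print(account_properties.ConsistencyPolicy)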
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
Question 2:
You need to select the partition key for con-iot1. The solution must meet the IoT telemetry requirements. What should you select?
A. the timestamp
B. the humidity
C. the temperature
D. the device ID
Correct Answer: D
The partition key determines how Azure Cosmos DB routes data across partitions, and it needs to make sense in the context of your specific scenario. The IoT device ID is generally the “natural” partition key for IoT applications.
Scenario: The iotdb database will contain two containers named con-iot1 and con-iot2. Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.
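For context, a minimal sketch that creates con-iot1 partitioned on the device ID with the Python SDK (azure-cosmos); the endpoint, key, and the /deviceId property path are assumptions:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<primary-key>")  # hypothetical values
database = client.get_database_client("iotdb")

# Partitioning on the device ID spreads telemetry evenly across devices and keeps
# each device's data together, which helps keep RU consumption (and cost) predictable.
container = database.create_container_if_not_exists(
    id="con-iot1",
    partition_key=PartitionKey(path="/deviceId"),  # assumed property name for the device ID
)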
Reference: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db
Question 3:
You configure multi-region writes for account1.
You need to ensure that App1 supports the new configuration for account1. The solution must meet the business requirements and the product catalog requirements.
What should you do?
A. Set the default consistency level of account1 to bounded staleness.
B. Create a private endpoint connection.
C. Modify the connection policy of App1.
D. Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor containers.
Correct Answer: D
App1 queries the con-product and con-productVendor containers.
Note: Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
Scenario:
Develop an app named App1 that will run from all locations and query the data in account1.
Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.
Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.
Incorrect Answers:
A: Bounded staleness relates to write-ordering guarantees; App1 only performs reads.
Note: Bounded staleness is frequently chosen by globally distributed applications that expect low write latencies but require total global order guarantee. Bounded staleness is great for applications featuring group collaboration and sharing, stock ticker, publish-subscribe/queueing etc.
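For the correct answer (D), changing provisioned throughput is an online operation. A minimal sketch with the Python SDK (azure-cosmos), assuming manual (non-autoscale) throughput and a hypothetical endpoint, key, and database name:

from azure.cosmos import CosmosClient

client = CosmosClient("<account1-endpoint>", credential="<primary-key>")  # hypothetical values
database = client.get_database_client("productdb")                        # assumed database name

for name in ("con-product", "con-productVendor"):
    container = database.get_container_client(name)
    current = container.get_throughput()                           # current provisioned RU/s
    container.replace_throughput(current.offer_throughput + 1000)  # example: add 1,000 RU/s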
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
Question 4:
You need to provide a solution for the Azure Functions notifications following updates to con-product. The solution must meet the business requirements and the product catalog requirements. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. Configure the trigger for each function to use a different leaseCollectionPrefix
B. Configure the trigger for each function to use the same leaseCollectionName
C. Configure the trigger for each function to use a different leaseCollectionName
D. Configure the trigger for each function to use the same leaseCollectionPrefix
Correct Answer: AB
leaseCollectionPrefix: when set, the value is added as a prefix to the leases created in the Lease collection for this Function. Using a prefix allows two separate Azure Functions to share the same Lease collection by using different prefixes.
Scenario: Use Azure Functions to send notifications about product updates to different recipients.
Trigger the execution of two Azure functions following every update to any document in the con-product container.
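A hedged sketch of the relevant trigger settings for the two functions, shown here as Python dictionaries that mirror each function's function.json binding (the connection setting, database name, and prefix values are assumptions); both functions share the same lease collection and differ only in the prefix:

# function.json bindings represented as Python dicts for illustration only.
function_one_trigger = {
    "type": "cosmosDBTrigger",
    "name": "documents",
    "direction": "in",
    "connectionStringSetting": "CosmosDBConnection",  # assumed app setting name
    "databaseName": "productdb",                      # assumed database name
    "collectionName": "con-product",
    "leaseCollectionName": "leases",                  # same lease collection for both functions
    "leaseCollectionPrefix": "notify-email",          # hypothetical prefix, unique per function
}

# The second function reuses the same lease collection with a different prefix.
function_two_trigger = dict(function_one_trigger, leaseCollectionPrefix="notify-sms")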
Reference:
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
Question 5:
You need to identify which connectivity mode to use when implementing App2. The solution must support the planned changes and meet the business requirements. Which connectivity mode should you identify?
A. Direct mode over HTTPS
B. Gateway mode (using HTTPS)
C. Direct mode over TCP
Correct Answer: C
Scenario: Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must be limited to a single DNS endpoint when accessing account2.
By using Azure Private Link, you can connect to an Azure Cosmos account via a private endpoint. The private endpoint is a set of private IP addresses in a subnet within your virtual network.
When you’re using Private Link with an Azure Cosmos account through a direct mode connection, you can use only the TCP protocol. The HTTP protocol is not currently supported.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints
Question 6:
You have an application named App1 that reads the data in an Azure Cosmos DB Core (SQL) API account. App1 runs the same read queries every minute. The default consistency level for the account is set to eventual.
You discover that every query consumes request units (RUs) instead of using the cache.
You verify the IntegratedCacheItemHitRate metric and the IntegratedCacheQueryHitRate metric. Both metrics have values of 0.
You verify that the dedicated gateway cluster is provisioned and used in the connection string.
You need to ensure that App1 uses the Azure Cosmos DB integrated cache.
What should you configure?
A. the indexing policy of the Azure Cosmos DB container
B. the consistency level of the requests from App1
C. the connectivity mode of the App1 CosmosClient
D. the default consistency level of the Azure Cosmos DB account
Correct Answer: C
Because the integrated cache is specific to your Azure Cosmos DB account and requires significant CPU and memory, it requires a dedicated gateway node. Connect to Azure Cosmos DB using gateway mode.
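A minimal sketch of a gateway-mode client pointed at the dedicated gateway, using the Python SDK (azure-cosmos); the account name, key, and container names are assumptions (the Python SDK always connects in gateway mode, while SDKs that default to direct mode, such as .NET, must be switched to gateway mode explicitly):

from azure.cosmos import CosmosClient

# The dedicated gateway has its own endpoint (*.sqlx.cosmos.azure.com); the regular
# *.documents.azure.com endpoint bypasses the integrated cache.
DEDICATED_GATEWAY = "https://<account-name>.sqlx.cosmos.azure.com:443/"  # hypothetical account name
client = CosmosClient(DEDICATED_GATEWAY, credential="<primary-key>", consistency_level="Eventual")

container = client.get_database_client("<database>").get_container_client("<container>")
items = list(container.query_items(query="SELECT * FROM c", enable_cross_partition_query=True))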
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/integrated-cache-faq
Question 7:
You are developing an application that will use an Azure Cosmos DB Core (SQL) API account as a data source. You need to create a report that displays the top five most ordered fruits as shown in the following table.
A collection that contains aggregated data already exists. The following is a sample document:
{
"name": "apple",
"type": ["fruit", "exotic"],
"orders": 10000
}
Which two queries can you use to retrieve data for the report? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Option A
B. Option B
C. Option C
D. Option D
Correct Answer: BD
ARRAY_CONTAINS returns a Boolean indicating whether the array contains the specified value. You can check for a partial or full match of an object by using a Boolean expression within the command.
Incorrect Answers:
A: The default sort order is ascending; the report requires descending order on orders.
C: The ordering must be on orders, not on type.
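The answer options are images in the original exam; based on the explanation above, a matching query filters on the type array with ARRAY_CONTAINS and sorts by orders in descending order, limited to the top five. A hedged sketch run through the Python SDK (azure-cosmos) with hypothetical connection values:

from azure.cosmos import CosmosClient

container = (
    CosmosClient("<account-endpoint>", credential="<primary-key>")  # hypothetical values
    .get_database_client("<database>")
    .get_container_client("<aggregates-container>")
)

# Top five most-ordered fruits: filter on the type array, sort by orders descending.
query = (
    "SELECT TOP 5 c.name, c.orders "
    "FROM c "
    "WHERE ARRAY_CONTAINS(c.type, 'fruit') "
    "ORDER BY c.orders DESC"
)
for item in container.query_items(query=query, enable_cross_partition_query=True):
    print(item["name"], item["orders"])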
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-array-contains
Question 8:
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from the devices will occur every second. The following is a sample of the data.
You need to select a partition key that meets the following requirements for writes:
1. Minimizes the partition skew
2. Avoids capacity limits
3. Avoids hot partitions
What should you do?
A. Use timestamp as the partition key.
B. Create a new synthetic key that contains deviceId and sensor1Value.
C. Create a new synthetic key that contains deviceId and deviceManufacturer.
D. Create a new synthetic key that contains deviceId and a random number.
Correct Answer: D
Use a partition key with a random suffix. One way to distribute the workload more evenly is to append a random number to the end of the partition key value. When you distribute items in this way, you can perform parallel write operations across partitions (see the sketch below).
Incorrect Answers:
A: You would also not want to partition the data on timestamp, because this creates a hot partition: for a given minute, all writes hit one partition. Retrieving the data for a device then becomes a fan-out query, because the data may be distributed across all partitions.
B: sensor1Value has only two values.
C: All the devices could have the same manufacturer.
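A minimal sketch of building such a synthetic partition key before each write (the property names mirror the options above; the suffix range of 10 is an example):

import random

def synthetic_partition_key(device_id: str, suffix_count: int = 10) -> str:
    # Appending a random suffix spreads a single device's writes across several
    # logical partitions, avoiding a hot partition while staying under capacity limits.
    return f"{device_id}-{random.randint(0, suffix_count - 1)}"

item = {
    "id": "reading-0001",                               # hypothetical document id
    "deviceId": "dev-42",
    "partitionKey": synthetic_partition_key("dev-42"),  # e.g. "dev-42-7"
    "sensor1Value": 25.3,
}
# container.create_item(body=item)  # container: a ContainerProxy with partition key path /partitionKey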
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/synthetic-partition-keys
Question 9:
You maintain a relational database for a book publisher. The database contains the following tables.
The most common query lists the books for a given authorId.
You need to develop a non-relational data model for Azure Cosmos DB Core (SQL) API that will replace the relational database. The solution must minimize latency and read operation costs.
What should you include in the solution?
A. Create a container for Author and a container for Book. In each Author document, embed the bookId of each book by the author. In each Book document, embed the authorId of each author.
B. Create Author, Book, and Bookauthorlnk documents in the same container.
C. Create a container that contains a document for each Author and a document for each Book. In each Book document, embed authorId.
D. Create a container for Author and a container for Book. In each Author document and Book document embed the data from Bookauthorlnk.
Correct Answer: A
Embedding the authorId values in each Book document (and the bookId values in each Author document) denormalizes the relationship, so the most common query, listing the books for a given authorId, can be answered from the Book container alone with a single low-cost read and no cross-container lookups.
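A hedged sketch of the two document shapes that option A describes (any property beyond the ids is illustrative):

# Author container: one document per author, with the bookId of each book embedded.
author_doc = {
    "id": "author-1001",
    "authorId": "1001",
    "name": "J. Doe",                  # illustrative property
    "bookIds": ["book-1", "book-2"],   # embedded bookId for each book by the author
}

# Book container: one document per book, with the authorId of each author embedded.
book_doc = {
    "id": "book-1",
    "bookId": "book-1",
    "title": "Sample Title",           # illustrative property
    "authorIds": ["1001"],             # embedded authorId of each author
}

# The most common query can then run against the Book container alone, for example:
# SELECT * FROM c WHERE ARRAY_CONTAINS(c.authorIds, "1001")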
Question 10:
You have an Azure Cosmos DB Core (SQL) API account.
You run the following query against a container in the account.
SELECT
IS_NUMBER("1234") AS A,
IS_NUMBER(1234) AS B,
IS_NUMBER({prop: 1234}) AS C
What is the output of the query?
A. [{“A”: false, “B”: true, “C”: false}]
B. [{“A”: true, “B”: false, “C”: true}]
C. [{“A”: true, “B”: true, “C”: false}]
D. [{“A”: true, “B”: true, “C”: true}]
Correct Answer: A
IS_NUMBER returns a Boolean value indicating whether the type of the specified expression is a number. “1234” is a string, not a number, and {prop: 1234} is an object, so only B is true.
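A small sketch that runs the same query through the Python SDK (azure-cosmos) with hypothetical connection values:

from azure.cosmos import CosmosClient

container = (
    CosmosClient("<account-endpoint>", credential="<primary-key>")  # hypothetical values
    .get_database_client("<database>")
    .get_container_client("<container>")
)

query = 'SELECT IS_NUMBER("1234") AS A, IS_NUMBER(1234) AS B, IS_NUMBER({prop: 1234}) AS C'
print(list(container.query_items(query=query, enable_cross_partition_query=True)))
# Per the answer above: [{'A': False, 'B': True, 'C': False}]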
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-is-number
Question 11:
You need to implement a trigger in Azure Cosmos DB Core (SQL) API that will run before an item is inserted into a container. Which two actions should you perform to ensure that the trigger runs? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. Append pre to the name of the JavaScript function trigger.
B. For each create request, set the access condition in RequestOptions.
C. Register the trigger as a pre-trigger.
D. For each create request, set the consistency level to session in RequestOptions.
E. For each create request, set the trigger name in RequestOptions.
Correct Answer: CE
C: When triggers are registered, you can specify the operations that they can run with.
E: When executing, pre-triggers are passed in the RequestOptions object by specifying PreTriggerInclude and then passing the name of the trigger in a List object.
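A minimal sketch with the Python SDK (azure-cosmos): a hypothetical pre-trigger that stamps a creation time is registered for create operations, and its name is then passed on each create request (endpoint, key, and item values are assumptions):

import azure.cosmos.documents as documents
from azure.cosmos import CosmosClient

container = (
    CosmosClient("<account-endpoint>", credential="<primary-key>")  # hypothetical values
    .get_database_client("<database>")
    .get_container_client("<container>")
)

# Register the JavaScript function as a pre-trigger scoped to create operations.
trigger_definition = {
    "id": "trgPreStampCreated",
    "serverScript": "function trgPreStampCreated() {"
                    "  var doc = getContext().getRequest().getBody();"
                    "  doc.createdAt = new Date().toISOString();"
                    "  getContext().getRequest().setBody(doc); }",
    "triggerType": documents.TriggerType.Pre,
    "triggerOperation": documents.TriggerOperation.Create,
}
container.scripts.create_trigger(trigger_definition)

# The pre-trigger only runs when the create request names it explicitly.
container.create_item(
    body={"id": "item-1", "partitionKey": "pk-1"},  # hypothetical item
    pre_trigger_include="trgPreStampCreated",
)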
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs
Question 12:
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1 exceeds a specific value.
Solution: You configure an Azure Monitor alert to trigger the function.
Does this meet the goal?
A. Yes
B. No
Correct Answer: A
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Note: Alerts are used to set up recurring tests to monitor the availability and responsiveness of your Azure Cosmos DB resources. Alerts can send you a notification in the form of an email, or execute an Azure Function when one of your metrics reaches the threshold or if a specific event is logged in the activity log.
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
Question 13:
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1 exceeds a specific value.
Solution: You configure the function to have an Azure CosmosDB trigger.
Does this meet the goal?
A. Yes
B. No
Correct Answer: B
Instead configure an Azure Monitor alert to trigger the function.
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
Question 14:
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1 exceeds a specific value.
Solution: You configure an application to use the change feed processor to read the change feed and you configure the application to trigger the function.
Does this meet the goal?
A. Yes
B. No
Correct Answer: B
Instead configure an Azure Monitor alert to trigger the function.
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
Question 15:
You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a compact binary format.
Which three configuration items should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. “connector.class”: “com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector”
B. “key.converter”: “org.apache.kafka.connect.json.JsonConverter”
C. “key.converter”: “io.confluent.connect.avro.AvroConverter”
D. “connect.cosmos.containers.topicmap”: “iot#telemetry”
E. “connect.cosmos.containers.topicmap”: “iot”
F. “connector.class”: “com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector”
Correct Answer: ACD
A: Kafka must ingest data from Azure Cosmos DB, so the Kafka Connect source connector (CosmosDBSourceConnector) is required. The source connector reads data from Azure Cosmos DB containers and publishes it to Kafka topics; the sink connector works in the opposite direction.
C: Avro is a compact binary format, while JSON is plain text.
D: The topic map uses the topic#container format, so “iot#telemetry” publishes the telemetry container to the iot topic.
Incorrect Answers:
B: The JSON converter produces plain text, not a compact binary format.
E: “iot” omits the container half of the topic#container mapping.
F: CosmosDBSinkConnector exports data from Apache Kafka topics to an Azure Cosmos DB database; it does not read from Azure Cosmos DB.
Note, a full example of a source connector configuration (endpoint, key, and schema registry values omitted):
{ "name": "cosmosdb-source-connector", "config": {
"connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
"tasks.max": "1",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "",
"connect.cosmos.connection.endpoint": "https://.documents.azure.com:443/",
"connect.cosmos.master.key": "",
"connect.cosmos.databasename": "kafkaconnect",
"connect.cosmos.containers.topicmap": "iot#telemetry"
} }
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink
https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/
Visit our site to get more DP-420 Q&As: https://www.itexamfun.com/dp-420.html (51 Q&As Dumps)