An application updates an inventory by running only one process at any given time, to keep the inventory consistent. This process takes 200 milliseconds (0.2 seconds) to execute; therefore, the scalability threshold of the application is five requests per second. What is the impact on the application if horizontal scaling is applied, thereby increasing the number of Mule workers?
A. The application scalability threshold is five requests per second regardless of the horizontal scaling
B. The total process execution time is now 100 milliseconds (.1 seconds)
C. The application scalability threshold is now 10 requests per second
D. Horizontal scaling cannot be applied to an already-running application
Explanation:
Given that the application is designed to handle only one process at a time
to maintain data consistency, here is why horizontal scaling won't increase the
processing limit:
Single-Process Constraint: Because only one inventory-update process may run at any given time, each request must be handled sequentially and takes 200 milliseconds. Adding more Mule workers does not relax this constraint, so the scalability threshold remains five requests per second regardless of horizontal scaling (option A).
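As a quick worked check (a sketch of the arithmetic, assuming strictly serialized processing as the question states):

\[
T_{\max} \;=\; \frac{1}{t_{\text{process}}} \;=\; \frac{1}{0.2\ \mathrm{s}} \;=\; 5\ \text{requests per second}
\]

Because the workers cannot run the inventory update concurrently, this bound is independent of the number of workers, so horizontal scaling leaves the threshold unchanged.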
How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?
A. By refining the resource definitions by adding a description of the rate limiting policy behavior
B. By refining the request definitions by adding a remainingRequests query parameter with description, type, and example
C. By refining the response definitions by adding the out-of-the-box Anypoint Platform ratelimit-enforcement securityScheme with description, type, and example
D. By refining the response definitions by adding the x-ratelimit-* response headers with
description, type, and example
Explanation:
Correct Answer: By refining the response definitions by adding the x-ratelimit-* response
headers with description, type, and example
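For illustration, here is a minimal RAML sketch of such refined response definitions (the resource name and example values are assumptions; the x-ratelimit-* header names are those exposed by Anypoint Platform's rate-limiting policies):

#%RAML 1.0
title: Quote API
/quote:
  get:
    responses:
      200:
        headers:
          x-ratelimit-limit:
            description: Number of requests allowed in the current time window
            type: integer
            example: 100
          x-ratelimit-remaining:
            description: Number of requests remaining in the current time window
            type: integer
            example: 94
          x-ratelimit-reset:
            description: Milliseconds remaining until the current time window resets
            type: integer
            example: 45000

Documenting these headers in the API specification lets client developers see, directly from the RAML, how the rate limiting policy will surface in responses.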
*****************************************
A Rate Limiting policy is applied to an API implementation to protect the back-end system. Recently, there have been surges in demand that cause some API client POST requests to the API implementation to be rejected with policy-related errors, causing delays and complications to the API clients. How should the API policies that are applied to the API implementation be changed to reduce the frequency of errors returned to API clients, while still protecting the back-end system?
A. Keep the Rate Limiting policy and add a Client ID Enforcement policy
B. Remove the Rate Limiting policy and add an HTTP Caching policy
C. Remove the Rate Limiting policy and add a Spike Control policy
D. Keep the Rate Limiting policy and add an SLA-based Spike Control policy
Explanation:
When managing high traffic to an API, especially with POST requests, it is
crucial to ensure the API’s policies both protect the back-end systems and provide a
smooth client experience. Here’s the approach to reducing errors:
Rate Limiting policy: This policy enforces a hard limit on the number of requests within
a defined time period and rejects any excess requests outright, so during demand surges
API clients receive the policy-related errors described.
Spike Control policy: This policy smooths traffic spikes by queuing and delaying excess
requests (retrying them a configurable number of times) instead of rejecting them
immediately, while still capping the load that reaches the back-end system. Removing the
Rate Limiting policy and adding a Spike Control policy therefore reduces the frequency of
errors returned to API clients while still protecting the back-end system (option C).
An organization is implementing a Quote of the Day API that caches today's quote.
What scenario can use the CloudHub Object Store via the Object Store connector to persist
the cache's state?
A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state
B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state
C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state
D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state
Explanation:
Correct Answer: When there is one CloudHub deployment of the API implementation to
three CloudHub workers that must share the cache state.
*****************************************
Key details in the scenario:
>> Use the CloudHub Object Store via the Object Store connector
Considering the above details:
>> CloudHub Object Stores have a one-to-one relationship with CloudHub Mule applications.
>> An application's CloudHub Object Store CANNOT be shared among multiple Mule
applications running in different regions, business groups, or customer-hosted Mule
runtimes by using the Object Store connector.
>> If sharing is truly required, Anypoint Platform does allow access to another application's
CloudHub Object Store through the Object Store REST API, but NOT through the Object
Store connector.
So, the only scenario where the CloudHub Object Store can be used via the Object Store
connector to persist the cache's state is when there is one CloudHub deployment of the
API implementation to multiple CloudHub workers that must share the cache state.
What is a key performance indicator (KPI) that measures the success of a typical C4E that is immediately apparent in responses from the Anypoint Platform APIs?
A. The number of production outage incidents reported in the last 24 hours
B. The number of API implementations that have a publicly accessible HTTP endpoint and are being managed by Anypoint Platform
C. The fraction of API implementations deployed manually relative to those deployed using a CI/CD tool
D. The number of API specifications in RAML or OAS format published to Anypoint Exchange
Explanation:
Correct Answer: The number of API specifications in RAML or OAS format published to
Anypoint Exchange
*****************************************
>> The success of a C4E depends largely on its contribution to the number of reusable
assets it has helped build and publish to Anypoint Exchange.
>> It is NOT measured by factors such as the number of outages, manual vs. CI/CD
deployments, or publicly accessible HTTP endpoints.
>> The Anypoint Platform APIs make it easy to query the number of RAML/OAS assets
published to Anypoint Exchange, so the count of assets returned in the response
immediately shows how successful the C4E is.
Reference: https://help.mulesoft.com/s/question/0D52T00004mXSTUSA4/how-should-acompany-measure-c4e-success
An API implementation is deployed to CloudHub. What conditions can be alerted on using the default Anypoint Platform functionality, where the alert conditions depend on the API invocations to an API implementation?
A. When the API invocations are sent directly to the internal DNS record of the API implementation
B. When the API invocations are not over a secure TLS/SSL communication channel
C. When the API invocations originate from a geography different than the API
D. When the number of API invocations is below a threshold
Say there is a legacy CRM system called CRM-Z which offers the functions below:
1. Customer creation
2. Amend details of an existing customer
3. Retrieve details of a customer
4. Suspend a customer
What is the best way to implement System APIs to expose these functions of CRM-Z?
A. Implement a system API named customerManagement which has all the functionalities wrapped in it as various operations/resources
B. Implement different system APIs named createCustomer, amendCustomer, retrieveCustomer and suspendCustomer, as they are modular and have separation of concerns
C. Implement different system APIs named createCustomerInCRMZ, amendCustomerInCRMZ, retrieveCustomerFromCRMZ and suspendCustomerInCRMZ, as they are modular and have separation of concerns
Correct Answer: Implement different system APIs named createCustomer,
amendCustomer, retrieveCustomer and suspendCustomer, as they are modular and have
separation of concerns
*****************************************
>> It is quite normal to have a single API with different verb + resource combinations.
However, this fits well for an Experience API or a Process API, but it is not the best
architectural style for System APIs. So the option with just one customerManagement API
is not the best choice here.
>> The option with APIs in the createCustomerInCRMZ format is the next-closest choice
with respect to modularization and maintenance, but the API names are directly coupled to
the legacy system. A better, more forward-looking approach is to name the APIs by
abstracting away the back-end system names, as this allows seamless replacement or
migration of any back-end system at any time. So this is not the correct choice either.
>> createCustomer, amendCustomer, retrieveCustomer and suspendCustomer is the right
approach and the best fit compared to the other options: the APIs are modular, their names
are decoupled from the back-end system, and they cover all the requirements of a
System API.
An enterprise is embarking on the API-led digital transformation journey, and the central IT team has started to define System APIs. Currently there is no Enterprise Data Model being defined within the enterprise, and the definition of a clean Bounded Context Data Model requires too much effort. According to MuleSoft's recommended guidelines, how should the System API data model be defined?
A. If there are misspellings of the data fields in the back-end system, System APIs should not correct them, and should expose them as-is to mirror the back-end systems
B. The data model of the System APIs should make use of data types that approximately mirror those from the back-end systems
C. The data model should define its own naming convention, and not follow the same naming as the back-end systems
D. The System APIs should expose all back-end system fields
Explanation: When defining data models for System APIs without an established
Enterprise Data Model, MuleSoft recommends making the data types approximately mirror
those of the back-end systems, achieving quick and effective integration without adding
complexity (option B). This approach has several benefits: it keeps System API development
fast, minimizes mapping and transformation effort, and leaves room to evolve toward a
Bounded Context or Enterprise Data Model later.
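As an illustration, here is a minimal, hypothetical RAML data type sketch; the CRM field names and types are assumptions, chosen only to show a System API type that approximately mirrors a back-end record rather than an enterprise-wide canonical model:

#%RAML 1.0 DataType
# Hypothetical System API type mirroring the back-end CRM's customer record
type: object
properties:
  customerId: string      # same identifier type the back-end system uses
  firstName: string
  lastName: string
  creditLimit: number     # mirrors the back-end numeric type instead of inventing a canonical one
  createdDate: datetime   # back-end timestamp exposed as-is

Keeping the fields close to the source system lets the System API be delivered quickly, and the model can be refined later if a Bounded Context or Enterprise Data Model is introduced.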