When designing an upstream API and its implementation, the development team has been
advised to NOT set timeouts when invoking a downstream API, because that downstream
API has no SLA that can be relied upon. This is the only downstream API dependency of
that upstream API.
Assume the downstream API runs uninterrupted without crashing. What is the impact of
this advice?
A.
An SLA for the upstream API CANNOT be provided
B.
The invocation of the downstream API will run to completion without timing out
C.
A default timeout of 500 ms will automatically be applied by the Mule runtime in which the upstream API implementation executes
D.
A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in
which the downstream API implementation executes
An SLA for the upstream API CANNOT be provided
Explanation:
Correct Answer: An SLA for the upstream API CANNOT be provided.
*****************************************
>> First things first, the default HTTP response timeout for the HTTP connector is 10000 ms (10 seconds), NOT 500 ms.
>> The Mule runtime does NOT apply any "load-dependent" timeouts. No such behavior exists in Mule.
>> Because the HTTP connector applies a default 10000 ms response timeout, there is no guarantee that the invocation of the downstream API will run to completion without timing out, given its unreliable response times. If the downstream response takes longer than 10 seconds, the request may time out.
The main impact of this is that a proper SLA for the upstream API CANNOT be provided.
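For illustration only, below is a minimal sketch of a Mule 4 HTTP requester configuration showing where an explicit response timeout could be set on the downstream call; the config name, host, path, and the 5000 ms value are assumptions for this example, and if responseTimeout is omitted the connector's 10000 ms default applies.

    <!-- Hypothetical requester config for the downstream API; host, port, and path are placeholders -->
    <http:request-config name="Downstream_API_Config">
        <http:request-connection host="downstream.example.com" port="443" protocol="HTTPS"/>
    </http:request-config>

    <!-- Explicit response timeout on the request operation; without it, the default 10000 ms response timeout applies -->
    <http:request method="GET" config-ref="Downstream_API_Config" path="/resource" responseTimeout="5000"/>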
Reference: https://docs.mulesoft.com/http-connector/1.5/http-documentation#parameters-3
The asset version 2.0.0 of the Order API is successfully published in Exchange and configured in API Manager, with the Autodiscovery API ID correctly linked to the API implementation. A new GET method is added to the existing API specification, and after updates, the asset version of the Order API is 2.0.1. What happens to the Autodiscovery API ID when the new asset version is updated in API Manager?
A. The API ID changes, but no changes are needed to the API implementation for the new asset version in the API Autodiscovery global element because the API ID is automatically updated
B. The API ID changes, so the API implementation must be updated with the latest API ID for the new asset version in the API Autodiscovery global element
C. The API ID does not change, so no changes to the API implementation are needed for the new asset version in the API Autodiscovery global element
D. The API ID does not change, but the API implementation must be updated in the API Autodiscovery global element to indicate the new asset version 2.0.1
Explanation:
Understanding API Autodiscovery in MuleSoft: The API Autodiscovery global element links a deployed Mule application to a specific API instance in API Manager using the API instance ID (the Autodiscovery API ID), so that the policies, SLAs, and analytics configured for that instance are applied to the implementation.
Effect of Asset Version Update on API Autodiscovery: The API ID identifies the API instance in API Manager, not the Exchange asset version. Updating the instance from asset version 2.0.0 to 2.0.1 does not create a new instance, so the API ID remains the same.
Evaluating the Options: Because the API ID does not change, no changes to the API Autodiscovery global element in the API implementation are needed for the new asset version.
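For illustration only, below is a minimal sketch of an API Autodiscovery global element in a Mule 4 application; the property name api.id and the flow name order-api-main-flow are assumptions for this example. The apiId it references is the API instance ID from API Manager, which stays the same when the instance's asset version is updated from 2.0.0 to 2.0.1, so this element needs no change.

    <!-- Hypothetical autodiscovery element; apiId identifies the API instance in API Manager, not the asset version -->
    <api-gateway:autodiscovery apiId="${api.id}" flowRef="order-api-main-flow"/>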
A customer has an ELA contract with MuleSoft. An API deployed to CloudHub is consistently experiencing performance issues. Based on the root cause analysis, it is determined that autoscaling needs to be applied. How can this be achieved?
A. Configure a policy so that when the number of HTTP requests reaches a certain threshold the number of workers/replicas increases (horizontal scaling)
B. Configure two separate policies: when CPU and memory reach a certain threshold, increase the worker/replica type (vertical scaling) and the number of workers/replicas (horizontal scaling)
C. Configure a policy based on CPU usage so that CloudHub auto-adjusts the number of workers/replicas (horizontal scaling)
D. Configure a policy so that when the response time reaches a certain threshold the worker/replica type increases (vertical scaling)
Explanation:
In MuleSoft CloudHub, autoscaling is essential for managing application load efficiently. CloudHub supports autoscaling policies based on CPU usage that automatically adjust the number of workers/replicas (horizontal scaling), which is well-suited to applications with variable demand that need responsive resource allocation.
An API implementation is deployed on a single worker on CloudHub and invoked by
external API clients (outside of CloudHub). How can an alert be set up that is guaranteed to
trigger AS SOON AS that API implementation stops responding to API invocations?
A.
Implement a heartbeat/health check within the API and invoke it from outside the Anypoint Platform and alert when the heartbeat does not respond
B.
Configure a "worker not responding" alert in Anypoint Runtime Manager
C.
Handle API invocation exceptions within the calling API client and raise an alert from that API client when the API is unavailable
D.
Create an alert for when the API receives no requests within a specified time period
Configure a "worker not responding" alert in Anypoint Runtime Manager
Explanation:
Correct Answer: Configure a “Worker not responding” alert in Anypoint Runtime Manager.
*****************************************
>> All of the options could eventually help generate the required alert when the application stops responding.
>> However, handling exceptions within the calling API client and raising an alert from that client is inappropriate. There could be many API clients invoking the API implementation, and it is not realistic to require this setup consistently in all of them.
>> Implementing a health check/heartbeat within the API and calling it from outside to determine its health sounds reasonable, but it needs extra setup, and there is a good chance of false alarms whenever there are intermittent network issues between the external tool and the health check endpoint on the API implementation. The API implementation itself may be healthy, yet false alarms may still go out.
>> Creating an alert in API Manager for when the API receives no requests within a specified time period would generate realistic alerts, but even then false alarms may go out when there are genuinely no requests from API clients.
The best and correct way to achieve this requirement is to set up an alert in Runtime Manager with the condition "Worker not responding". This generates an alert AS SOON AS the worker becomes unresponsive.
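For comparison, below is a minimal sketch of the heartbeat/health-check approach described above; the listener config name, port, and /health path are assumptions for this example. It illustrates the extra setup (plus an external monitor polling the endpoint) that the Runtime Manager "Worker not responding" alert avoids.

    <!-- Hypothetical health-check flow; an external monitor would poll /health and raise an alert on failures -->
    <http:listener-config name="api-httpListenerConfig">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>

    <flow name="health-check-flow">
        <http:listener config-ref="api-httpListenerConfig" path="/health"/>
        <set-payload value='{"status": "UP"}' mimeType="application/json"/>
    </flow>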
An established communications company is beginning its API-led connectivity journey. The company has been using a successful Enterprise Data Model for many years. The company has identified a self-service account management app as the first effort for API-led connectivity, and it has identified the following APIs.
A. Customer SAPI
B. Customer Lookup PAPI
C. Mobile Account Management EAPI
D. Service SAPI
Explanation: In the API-led connectivity approach, APIs are categorized into Experience,
Process, and System layers:
Enterprise Data Model Scope:
Why Option C is Correct:
Explanation of Incorrect Options:
References:
For additional guidance, review MuleSoft's best practices on API-led
connectivity and data modeling.
An organization is implementing a Quote of the Day API that caches today's quote.
What scenario can use the CloudHub Object Store via the Object Store connector to persist
the cache's state?
A.
When there are three CloudHub deployments of the API implementation to three
separate CloudHub regions that must share the cache state
B.
When there are two CloudHub deployments of the API implementation by two Anypoint
Platform business groups to the same CloudHub region that must share the cache state
C.
When there is one deployment of the API implementation to CloudHub and another
deployment to a customer-hosted Mule runtime that must share the cache state
D.
When there is one CloudHub deployment of the API implementation to three CloudHub
workers that must share the cache state
When there is one CloudHub deployment of the API implementation to three CloudHub
workers that must share the cache state
Explanation:
Correct Answer: When there is one CloudHub deployment of the API implementation to
three CloudHub workers that must share the cache state.
*****************************************
Key details in the scenario:
>> Use the CloudHub Object Store via the Object Store connector
Considering the above details:
>> A CloudHub Object Store has a one-to-one relationship with its CloudHub Mule application.
>> An application's CloudHub Object Store CANNOT be shared, via the Object Store connector, among multiple Mule applications running in different regions, different business groups, or on customer-hosted Mule runtimes.
>> If such sharing is really necessary, Anypoint Platform does allow access to another application's CloudHub Object Store through the Object Store REST API, but NOT through the Object Store connector.
So, the only scenario where the CloudHub Object Store can be used via the Object Store connector to persist the cache's state is when there is one CloudHub deployment of the API implementation to multiple CloudHub workers that must share the cache state.
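As an illustration, below is a minimal sketch of caching today's quote in an Object Store shared by all workers of a single CloudHub deployment; the store name, key, TTL, and default value are assumptions for this example.

    <!-- Hypothetical object store definition; on CloudHub it is backed by the CloudHub Object Store shared across the workers of this one application -->
    <os:object-store name="quoteCache" persistent="true" entryTtl="1" entryTtlUnit="DAYS"/>

    <!-- Cache today's quote -->
    <os:store key="quoteOfTheDay" objectStore="quoteCache">
        <os:value>#[payload]</os:value>
    </os:store>

    <!-- Read the cached quote; every worker of this deployment sees the same entry -->
    <os:retrieve key="quoteOfTheDay" objectStore="quoteCache">
        <os:default-value>#['No quote cached yet']</os:default-value>
    </os:retrieve>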
Which Anypoint Platform capabilities listed below fall under the APIs and API
Invocations/Consumers category? Select TWO.
A.
API Operations and Management
B.
API Runtime Execution and Hosting
C.
API Consumer Engagement
D.
API Design and Development
API Design and Development and API Runtime Execution and Hosting
Explanation:
Correct Answers: API Design and Development and API Runtime Execution and Hosting
*****************************************
>> API Design and Development - Anypoint Studio, Anypoint Design Center, Anypoint
Connectors
>> API Runtime Execution and Hosting - Mule Runtimes, CloudHub, Runtime Services
>> API Operations and Management - Anypoint API Manager, Anypoint Exchange
>> API Consumer Engagement - API Contracts, Public Portals, Anypoint Exchange, API
Notebooks
What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?
A.
A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design
B.
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
C.
The FQDNs are determined by the application name, but can be modified by an
administrator after deployment
D.
The FQDNs are determined by both the application name and the Anypoint Platform
organization
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
Explanation:
Correct Answer: The FQDNs are determined by the application name chosen,
IRRESPECTIVE of the region
*****************************************
>> When deploying applications to the Shared Worker Cloud, the FQDN is always determined by the application name chosen.
>> It does NOT matter what region the app is being deployed to.
>> Although the generated FQDN does include the region (e.g. exp-salesorder-api.au-s1.cloudhub.io), this does NOT mean the same application name can be reused when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization, and it solely determines the FQDN for the Shared Load Balancers.