What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?
A.
The API policy is defined in Runtime Manager as part of the API deployment to a Mule
runtime, and then ONLY applied to the specific API instance
B.
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
C.
The API policy is defined in API Manager and then automatically applied to ALL API instances
D.
The API policy is defined in API Manager, and then applied to ALL API instances in the
specified environment
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
Explanation:
Correct Answer: The API policy is defined in API Manager for a specific API instance, and
then ONLY applied to the specific API instance.
*****************************************
>> Once API specifications are ready and published to Exchange, they are registered in API
Manager as an API instance for each API.
>> API Manager is where API-level management takes place, such as addressing NFRs by
enforcing policies on API instances.
>> Multiple instances can be created for the same API and managed differently for
different purposes.
>> One instance can have one set of API policies applied, and another instance of the same
API can have a different set of policies applied for some other purpose.
>> APIs and their instances are defined on a per-environment basis, so they need to be
managed separately in each environment.
>> A platform feature can be used to promote the same API instance configuration (SLAs,
policies, etc.) when promoting to higher environments, but this is optional; the configuration
can still be changed per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes,
but NOT the APIs themselves. Although API policies are executed in Mule runtimes, API
policies CANNOT be enforced from Runtime Manager; this must be done via API Manager,
for a specific API instance in an environment.
So, based on these facts, the correct statement among the given choices is: "The API policy is
defined in API Manager for a specific API instance, and then ONLY applied to the specific
API instance".
Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept
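As a concrete illustration of this per-instance scoping, the following minimal sketch calls the API Manager REST API to attach a rate-limiting policy to one specific API instance (and to that instance only). The endpoint path, policy asset identifier, version, and configuration fields are assumptions and should be verified against the API Manager API reference.

# Minimal sketch (not an official client): apply a policy to ONE API instance
# via the API Manager REST API. Endpoint path, asset identifier and
# configuration fields are assumptions -- verify them in the API Manager docs.
import requests

ANYPOINT = "https://anypoint.mulesoft.com"
ORG_ID = "my-org-id"                     # hypothetical
ENV_ID = "my-env-id"                     # hypothetical
API_INSTANCE_ID = "12345678"             # the ONE instance the policy applies to
TOKEN = "bearer-token-from-login"        # hypothetical

policy = {
    "assetId": "rate-limiting",          # assumed policy asset identifier
    "assetVersion": "1.3.1",             # assumed asset version
    "configurationData": {
        "rateLimits": [{"maximumRequests": 100, "timePeriodInMilliseconds": 60000}],
    },
}

resp = requests.post(
    f"{ANYPOINT}/apimanager/api/v1/organizations/{ORG_ID}"
    f"/environments/{ENV_ID}/apis/{API_INSTANCE_ID}/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
# The policy is now attached to this single API instance only; other instances
# of the same API, even in the same environment, are unaffected.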
An organization uses various cloud-based SaaS systems and multiple on-premises
systems. The on-premises systems are an important part of the organization's application
network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with
both the cloud-based SaaS systems and on-premises systems?
A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by Anypoint
Platform Private Cloud Edition control plane
A.
Option A
B.
Option B
C.
Option C
D.
Option D
Option B
Explanation:
Correct Answer: Use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
*****************************************
Key details to be taken from the given scenario:
>> Organization uses BOTH cloud-based and on-premises systems
>> On-premises systems can only be accessed from within the organization's intranet
Let us evaluate the given choices based on the above key details:
>> CloudHub-deployed Mule runtimes can ONLY be controlled using the MuleSoft-hosted
control plane. We CANNOT use the Private Cloud Edition's control plane to control CloudHub
Mule runtimes. So, the option suggesting this is INVALID.
>> Using only CloudHub-deployed Mule runtimes in the shared worker cloud managed by the
MuleSoft-hosted Anypoint Platform does not satisfy the scenario, because workers in the
shared worker cloud cannot reach systems that are only accessible from within the
organization's intranet. So, the option suggesting this is INVALID.
>> Using an on-premises installation of Mule runtimes that are completely isolated with NO
external network access, managed by the Anypoint Platform Private Cloud Edition control
plane, would work for the on-premises integrations. However, with NO external access,
integrations with the cloud-based SaaS systems are not possible. So, the option suggesting
this is INVALID.
CloudHub-hosted applications are well suited for integrating with SaaS-based applications,
while manually provisioned on-premises Mule runtimes can reach the intranet-only systems.
Therefore, the best way to configure and use Anypoint Platform to support these mixed/hybrid
integrations is to use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
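To make the "one control plane, two runtime planes" idea tangible, the sketch below queries the MuleSoft-hosted control plane for both CloudHub applications and customer-hosted (on-premises) servers registered in Runtime Manager. The endpoint paths and headers are assumptions for illustration; verify them against the CloudHub and Runtime Manager API documentation.

# Minimal sketch: the SAME MuleSoft-hosted control plane manages CloudHub
# workers (which reach the SaaS systems over the internet) and on-premises
# Mule runtimes (which sit inside the intranet next to the on-premises systems).
# Endpoint paths and headers below are assumptions, not a confirmed contract.
import requests

ANYPOINT = "https://anypoint.mulesoft.com"
TOKEN = "bearer-token"    # hypothetical
ORG_ID = "my-org-id"      # hypothetical
ENV_ID = "my-env-id"      # hypothetical

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "X-ANYPNT-ORG-ID": ORG_ID,
    "X-ANYPNT-ENV-ID": ENV_ID,
}

# Applications deployed to CloudHub workers
cloudhub_apps = requests.get(
    f"{ANYPOINT}/cloudhub/api/v2/applications", headers=headers, timeout=30
).json()

# Customer-hosted (on-premises) Mule runtimes registered with Runtime Manager
onprem_servers = requests.get(
    f"{ANYPOINT}/hybrid/api/v1/servers", headers=headers, timeout=30
).json()

print(len(cloudhub_apps), "CloudHub applications")
print(len(onprem_servers.get("data", [])), "on-premises servers")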
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take out from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails and the logs show a message indicating an out-of-file-space
message
Based on the above details:
>> Both auto-scaling options do NOT help, because auto-scaling rules cannot be defined
based on error messages. Auto-scaling rules are triggered by CPU/memory usage, not by a
particular error or by disk-space issues.
>> Increasing the number of CloudHub workers also does NOT help, because the failure is
not caused by CPU or memory pressure; it is caused by running out of disk space.
>> Moreover, the API performs a bulk insert to submit each received batch, which means all
of a batch's data is handled by ONE worker at a time. So, the disk-space issue has to be
tackled on a per-worker basis. Having multiple workers does not help, as a batch may still
fail on whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the size (vCores) of the worker,
so that a worker with more disk space is provisioned.
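This reasoning can be made concrete with a purely illustrative model (all numbers below are hypothetical): each store's batch is bulk-inserted, and its temporary files are written, by exactly one worker, so only that worker's disk size matters.

# Back-of-the-envelope model (hypothetical numbers) of why scaling OUT does not
# fix an out-of-disk-space failure, while scaling UP does.
def batch_succeeds(batch_temp_files_gb: float, worker_disk_gb: float) -> bool:
    # A batch fails if its temporary files do not fit on the ONE worker handling it.
    return batch_temp_files_gb <= worker_disk_gb

BATCH_GB = 10          # temp-file footprint of one store's batch (hypothetical)
SMALL_WORKER_GB = 8    # disk on the current worker size (hypothetical)
LARGE_WORKER_GB = 40   # disk on a larger worker size (hypothetical)

# More small workers: every batch still lands on a single small worker and fails.
print(all(batch_succeeds(BATCH_GB, SMALL_WORKER_GB) for _ in range(4)))  # False

# A bigger worker: the same batch now fits on the one worker that handles it.
print(batch_succeeds(BATCH_GB, LARGE_WORKER_GB))                         # True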
What is the most performant out-of-the-box solution in Anypoint Platform to track
transaction state in an asynchronously executing long-running process implemented as a
Mule application deployed to multiple CloudHub workers?
A.
Redis distributed cache
B.
java.util.WeakHashMap
C.
Persistent Object Store
D.
File-based storage
Persistent Object Store
Explanation: Correct Answer: Persistent Object Store
*****************************************
>> Redis distributed cache is performant, but it is NOT an out-of-the-box solution in Anypoint
Platform.
>> File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.
>> java.util.WeakHashMap requires a completely custom cache implementation written in
Java code and is limited to the JVM in which it runs. This means the cached state is local to
each worker and is not shared when running on multiple workers, so it is neither
out-of-the-box nor worker-aware across multiple CloudHub workers.
https://www.baeldung.com/java-weakhashmap
>> Persistent Object Store is an out-of-the-box solution provided by Anypoint Platform that is
performant as well as worker-aware across multiple workers running on CloudHub.
https://docs.mulesoft.com/object-store/
So, Persistent Object Store is the right answer.
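For illustration, the sketch below tracks the state of a long-running transaction in Object Store v2, which is persistent and shared by all CloudHub workers of the application. Inside a Mule application the Object Store connector would normally be used instead; the regional host, path, and authentication shown here are assumptions, not the confirmed Object Store v2 REST contract.

# Minimal sketch: transaction state written by one worker and readable by any
# other worker (or after a restart), because the store is persistent and shared.
# Host, path and auth below are assumptions -- check the Object Store v2 docs.
import requests

OSV2 = "https://object-store-us-east-1.anypoint.mulesoft.com/api/v1"  # assumed host
ORG_ID, ENV_ID, STORE_ID = "my-org-id", "my-env-id", "my-store-id"    # hypothetical
TOKEN = "bearer-token"                                                # hypothetical
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def put_state(txn_id: str, state: str) -> None:
    # Any worker can record the current state of a transaction...
    requests.put(
        f"{OSV2}/organizations/{ORG_ID}/environments/{ENV_ID}"
        f"/stores/{STORE_ID}/keys/{txn_id}",
        headers=HEADERS, json=state, timeout=30,
    ).raise_for_status()

def get_state(txn_id: str) -> str:
    # ...and any other worker can read it back later.
    resp = requests.get(
        f"{OSV2}/organizations/{ORG_ID}/environments/{ENV_ID}"
        f"/stores/{STORE_ID}/keys/{txn_id}",
        headers=HEADERS, timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

put_state("order-42", "AWAITING_PAYMENT")
print(get_state("order-42"))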
Which out-of-the-box key performance indicator measures the success of a typical Center for Enablement and is immediately available in responses from Anypoint Platform APIs?
A. Per business group, the ratio of the number of production API implementations deployed using a CI/CD pipeline to the number of production API implementations deployed manually
B. Per deployed API implementation, the amount of bandwidth consumed each day
C. Per published API, the number of developers that downloaded a version of the API specification
D. Per published API, the number of consumers that requested access to the API and have been approved in the Production environment
An enterprise is embarking on the API-led digital transformation journey, and the central IT team has started to define System APIs. Currently there is no Enterprise Data Model being defined within the enterprise, and the definition of a clean Bounded Context Data Model requires too much effort. According to MuleSoft's recommended guidelines, how should the System API data model be defined?
A. If there are misspellings of the data fields in the back-end system, System APIs should not correct them, and should expose them as-is to mirror the back-end systems
B. The data model of the System APIs should make use of data types that approximately mirror those from the back-end systems
C. The data model should define its own naming convention, and not follow the same naming as the back-end systems
D. The System APIs should expose all back-end system fields
Explanation: When defining data models for System APIs without an established
Enterprise Data Model, MuleSoft recommends using data types that approximately mirror
those from the back-end systems. This achieves quick and effective unlocking of the
back-end systems without the effort of defining a clean Bounded Context Data Model or a
canonical model up front.
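As a minimal sketch of what "approximately mirroring" the back-end types can look like (the field names and types below are hypothetical, not from any real back-end system):

# A System API data model that approximately mirrors the back-end record,
# rather than inventing a new canonical model. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerRecord:
    cust_id: str                          # same identifier the back-end uses
    cust_name: str                        # same (possibly imperfect) naming
    credit_limit: Optional[float] = None  # nullable in the back-end, so optional here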
An existing Quoting API is defined in RAML and used by REST clients for interacting with the quoting engine. Currently there is a resource defined in the RAML that allows the creation of quotes; however, a new requirement was just received to allow for the updating of existing quotes. Which two actions need to be taken to facilitate this change so it can be processed? (Choose 2 answers)
A. Update the API implementation to accommodate the new update request
B. Remove the old client applications and create new client applications to account for the changes
C. Update the RAML with new method details for the update request
D. Deprecate existing versions of the API in Exchange
E. Add a new API policy to API Manager to allow access to the updated endpoint
Explanation:
To accommodate the new requirement of allowing updates to existing quotes, two actions
are needed: update the RAML with the new method details for the update request, and
update the API implementation to handle the new update request.
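As a rough illustration of the RAML change (shown here inside Python strings purely for illustration; the resource and method names are hypothetical and would have to match the existing Quoting API specification):

# Before: the RAML only allows creating quotes.
RAML_BEFORE = """
/quotes:
  post:
    description: Create a new quote
"""

# After: a new method is added so existing quotes can be updated. The API
# implementation must be updated in parallel to handle this new request.
RAML_AFTER = """
/quotes:
  post:
    description: Create a new quote
  /{quoteId}:
    put:
      description: Update an existing quote
"""

print(RAML_AFTER)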
Which two statements are true about the technology architecture of an Anypoint Virtual
Private Cloud (VPC)?
(Choose 2 answers)
A. Ports 8081 and 8082 are used
B. CIDR blocks are used
C. Anypoint VPC is responsible for load balancing the applications
D. Round-robin load balancing is used to distribute client requests across different applications
E. By default, HTTP requests can be made from the public internet to workers at port 6091
Explanation:
An Anypoint Virtual Private Cloud (VPC) provides a secure and private networking
environment for MuleSoft applications. A VPC is defined using a CIDR block that
determines its private address space, and applications deployed into a VPC continue to
listen on the standard worker ports 8081 (HTTP) and 8082 (HTTPS).
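As one concrete illustration of the CIDR-block point, the snippet below shows how the block assigned to a VPC determines the number of private addresses available inside it (the block value is just an example):

# The CIDR block chosen when creating an Anypoint VPC fixes its address space.
import ipaddress

vpc_block = ipaddress.ip_network("10.0.0.0/24")  # example CIDR block
print(vpc_block.num_addresses)  # 256 addresses available inside this VPC
print(vpc_block.prefixlen)      # 24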