When can CloudHub Object Store v2 be used?
A. To store an unlimited number of key-value pairs
B. To store payloads with an average size greater than 15MB
C. To store information in Mule 4 Object Store v1
D. To store key-value pairs with keys up to 300 characters
Explanation: CloudHub Object Store v2 is a managed key-value store provided by
MuleSoft to support use cases where temporary data storage is required. Here is
why Option D is correct:
>> Key length: Object Store v2 allows keys with a length of up to 300 characters,
making it suitable for applications needing flexible and descriptive keys.
>> Storage limits: Object Store v2 is designed for moderate, transient storage needs
and does not support an unlimited number of key-value pairs, so Option A is incorrect.
Stored values are also limited to 10 MB each (Base64-encoded), so payloads with an
average size greater than 15 MB cannot be stored, making Option B incorrect.
>> Backward compatibility: Object Store v2 cannot be used to store information in
Mule 4 Object Store v1. Option C is incorrect, as Object Store v1 and v2 are distinct
stores.
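To illustrate typical usage, here is a minimal Mule 4 sketch using the Object Store
connector; the store name accountCache and the variable customerId are hypothetical,
and on CloudHub a persistent store of this kind is typically backed by Object Store v2:

    <!-- A minimal sketch, assuming the Object Store connector is on the classpath;
         accountCache and customerId are hypothetical names. -->
    <os:object-store name="accountCache" persistent="true"/>

    <flow name="cache-account-flow">
        <!-- Store the payload under a descriptive key (OSv2 keys may be up to 300 characters) -->
        <os:store key="#['customer-' ++ vars.customerId]" objectStore="accountCache">
            <os:value>#[payload]</os:value>
        </os:store>

        <!-- Retrieve the value later by the same key, with a default if the key is absent -->
        <os:retrieve key="#['customer-' ++ vars.customerId]" objectStore="accountCache">
            <os:default-value>#[null]</os:default-value>
        </os:retrieve>
    </flow>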
An organization uses various cloud-based SaaS systems and multiple on-premises
systems. The on-premises systems are an important part of the organization's application
network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with
both the cloud-based SaaS systems and on-premises systems?
A. Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by Anypoint
Platform Private Cloud Edition control plane
B. Use a combination of CloudHub-deployed and manually provisioned on-premises Mule
runtimes managed by the MuleSoft-hosted Anypoint Platform control plane
C. Use CloudHub-deployed Mule runtimes in the shared worker cloud managed by the
MuleSoft-hosted Anypoint Platform
D. Use an on-premises installation of Mule runtimes that are completely isolated with NO
external network access, managed by the Anypoint Platform Private Cloud Edition control
plane
Use a combination of CloudHub-deployed and manually provisioned on-premises Mule
runtimes managed by the MuleSoft-hosted Anypoint Platform control plane
Explanation:
Correct Answer: Use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
*****************************************
Key details to be taken from the given scenario:
>> Organization uses BOTH cloud-based and on-premises systems
>> On-premises systems can only be accessed from within the organization's intranet
Let us evaluate the given choices based on above key details:
>> CloudHub-deployed Mule runtimes can ONLY be controlled using the MuleSoft-hosted
control plane. We CANNOT use the Private Cloud Edition's control plane to control
CloudHub Mule runtimes. So, the option suggesting this is INVALID.
>> Using ONLY CloudHub-deployed Mule runtimes in the shared worker cloud managed by
the MuleSoft-hosted Anypoint Platform is IRRELEVANT to the given scenario, because the
shared worker cloud has no access to the on-premises systems on the organization's
intranet. So, the option suggesting this is INVALID.
>> Using an on-premises installation of Mule runtimes that are completely isolated with NO
external network access, managed by the Anypoint Platform Private Cloud Edition control
plane, would work for on-premises integrations. However, with NO external access,
integrations cannot be done with SaaS-based apps. Moreover, CloudHub-hosted apps are
the best fit for integrating with SaaS-based applications. So, the option suggesting this is
also INVALID.
The best way to configure and use Anypoint Platform to support these mixed/hybrid
integrations is to use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
A company deploys Mule applications with default configurations through Runtime Manager to customer-hosted Mule runtimes. Each Mule application is an API implementation that exposes RESTful interfaces to API clients. The Mule runtimes are managed by the MuleSoft-hosted control plane. The payload is never used by any Logger components. When an API client sends an HTTP request to a customer-hosted Mule application, which metadata or data (payload) is pushed to the MuleSoft-hosted control plane?
A. Only the data
B. No data
C. The data and metadata
D. Only the metadata
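Only the metadata
Explanation:
Correct Answer: Only the metadata
*****************************************
>> The MuleSoft-hosted control plane needs only metadata to manage and monitor
customer-hosted Mule runtimes: deployment status, runtime health, and API analytics
such as request counts and response times.
>> The payload of each HTTP request is processed entirely on the customer-hosted Mule
runtime and is never pushed to the control plane, particularly as the payload is never
used by any Logger components.
So, only the metadata is pushed to the MuleSoft-hosted control plane.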
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details we can take from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails and the logs show a message indicating an out-of-file-space
message
Based on the above details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be set based on
error messages. Auto-scaling rules are triggered by CPU/memory usage, not by a given
error or disk-space issue.
>> Increasing the number of CloudHub workers also does NOT help, because the failure is
not due to performance aspects w.r.t. CPU or memory; it is due to disk space.
>> Moreover, the API performs a bulk insert to submit each received batch, which means
all of a batch's data is handled by ONE worker at a time. So, the disk-space issue must be
tackled on a "per worker" basis. Having multiple workers does not help, as a batch may
still fail on whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker
so that a worker with more disk space is provisioned.
An established communications company is beginning its API-led connectivity journey. The
company has been using a successful Enterprise Data Model for many years. The company
has identified a self-service account management app as the first effort for API-led
connectivity, and it has identified the following APIs.
A. Customer SAPI
B. Customer Lookup PAPI
C. Mobile Account Management EAPI
D. Service SAPI
Explanation: In the API-led connectivity approach, APIs are categorized into Experience,
Process, and System layers.
References: For additional guidance, review MuleSoft's best practices on API-led
connectivity and data modeling.
An API implementation is being designed that must invoke an Order API, which is known to
repeatedly experience downtime.
For this reason, a fallback API is to be called when the Order API is unavailable.
What approach to designing the invocation of the fallback API provides the best resilience?
A.
Search Anypoint Exchange for a suitable existing fallback API, and then implement
invocations to this fallback API in addition to the Order API
B.
Create a separate entry for the Order API in API Manager, and then invoke this API as a
fallback API if the primary Order API is unavailable
C.
Redirect client requests through an HTTP 307 Temporary Redirect status code to the
fallback API whenever the Order API is unavailable
D.
Set an option in the HTTP Requester component that invokes the Order API to instead
invoke a fallback API whenever an HTTP 4xx or 5xx response status code is returned from
the Order API
Search Anypoint Exchange for a suitable existing fallback API, and then implement
invocations to this fallback API in addition to the Order API
Explanation:
Correct Answer: Search Anypoint Exchange for a suitable existing fallback API, and then
implement invocations to this fallback API in addition to the Order API
*****************************************
>> Redirecting client requests through an HTTP 307 Temporary Redirect is not a good
approach, unless there is a pre-approved agreement with the API clients that they will
receive an HTTP 3xx temporary redirect status code and must implement fallback logic on
their side to call another API.
>> Creating a separate entry for the same Order API in API Manager would just create
another instance of it on top of the same API implementation. So, it does NO GOOD to use
a clone of the same API as a fallback API. A fallback API should ideally be a different API
implementation, not the same one as the primary.
>> There is NO option currently provided by the Anypoint HTTP Connector that allows us to
invoke a fallback API when certain HTTP status codes are received in response.
The only TRUE statement in the given options is to search Anypoint Exchange for a
suitable existing fallback API, and then implement invocations to this fallback API in
addition to the Order API.
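Since the HTTP Requester offers no built-in fallback option, the fallback invocation must be
coded explicitly. Here is a minimal Mule 4 sketch of one way to do that, using a try scope
whose error handler calls the fallback API; the configuration names orderApiConfig and
fallbackApiConfig are hypothetical:

    <!-- A minimal sketch, assuming HTTP request configurations named
         orderApiConfig and fallbackApiConfig exist; both names are hypothetical. -->
    <flow name="invoke-order-with-fallback-flow">
        <try>
            <!-- Primary invocation of the Order API -->
            <http:request method="POST" config-ref="orderApiConfig" path="/orders"/>
            <error-handler>
                <!-- On connectivity failures or 5xx error responses, invoke the fallback API -->
                <on-error-continue type="HTTP:CONNECTIVITY, HTTP:INTERNAL_SERVER_ERROR, HTTP:SERVICE_UNAVAILABLE">
                    <http:request method="POST" config-ref="fallbackApiConfig" path="/orders"/>
                </on-error-continue>
            </error-handler>
        </try>
    </flow>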
In which layer of API-led connectivity does the business logic orchestration reside?
A.
System Layer
B.
Experience Layer
C.
Process Layer
Process Layer
Explanation:
Correct Answer: Process Layer
*****************************************
>> The Experience layer is dedicated to enriching the end-user experience. This layer
meets the needs of different API clients/consumers.
>> The System layer is dedicated to APIs that are modular in nature and implement/expose
various individual functionalities of backend systems.
>> The Process layer is where simple or complex business orchestration logic is written,
by invoking one or many System layer modular APIs.
So, Process Layer is the right answer.
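As a sketch of what such orchestration can look like, the following hypothetical Mule 4
Process API flow invokes two System APIs and merges their results; the configuration
names customerSapiConfig and orderSapiConfig, plus the flow and variable names, are
assumptions (XML namespace declarations omitted for brevity):

    <!-- A minimal sketch of a Process API orchestrating two System APIs. -->
    <flow name="customer-orders-papi-flow">
        <!-- Call the Customer System API -->
        <http:request method="GET" config-ref="customerSapiConfig" path="/customers/{id}">
            <http:uri-params>#[{ id: vars.customerId }]</http:uri-params>
        </http:request>
        <set-variable variableName="customer" value="#[payload]"/>

        <!-- Call the Order System API -->
        <http:request method="GET" config-ref="orderSapiConfig" path="/orders">
            <http:query-params>#[{ customerId: vars.customerId }]</http:query-params>
        </http:request>

        <!-- Orchestration step: combine both System API responses into one result -->
        <ee:transform>
            <ee:message>
                <ee:set-payload><![CDATA[%dw 2.0
    output application/json
    ---
    { customer: vars.customer, orders: payload }]]></ee:set-payload>
            </ee:message>
        </ee:transform>
    </flow>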
What is most likely NOT a characteristic of an integration test for a REST API
implementation?
A.
The test needs all source and/or target systems configured and accessible
B.
The test runs immediately after the Mule application has been compiled and packaged
C.
The test is triggered by an external HTTP request
D.
The test prepares a known request payload and validates the response payload
The test runs immediately after the Mule application has been compiled and packaged
Explanation:
Correct Answer: The test runs immediately after the Mule application has been compiled
and packaged
*****************************************
>> Integration tests are the last layer of tests we need to add to be fully covered.
>> These tests actually run against a Mule runtime with your full configuration in place and
are triggered from an external source, just as they would be in PROD.
>> These tests exercise the application as a whole with actual transports enabled, so
external systems are affected when these tests run.
So, these tests do NOT run immediately after the Mule application has been compiled and
packaged.
FYI: Unit tests are the ones that run immediately after the Mule application has been
compiled and packaged.
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies#integration-testing
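To illustrate the contrast with integration tests, here is a minimal MUnit unit-test sketch;
the flow under test, order-flow, is hypothetical. Unlike an integration test, it invokes the
flow directly inside the build rather than through an external HTTP request against a
deployed application:

    <!-- A minimal MUnit 2 unit-test sketch; the flow under test, order-flow, is hypothetical. -->
    <munit:test name="order-flow-unit-test"
                description="Runs during the build, with no external systems required">
        <munit:execution>
            <!-- Invoke the flow under test directly, not through an external HTTP request -->
            <flow-ref name="order-flow"/>
        </munit:execution>
        <munit:validation>
            <!-- Validate the response payload produced by the flow -->
            <munit-tools:assert-that expression="#[payload]" is="#[MunitTools::notNullValue()]"/>
        </munit:validation>
    </munit:test>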