What correctly characterizes unit tests of Mule applications?
A. They test the validity of input and output of source and target systems
B. They must be run in a unit testing environment with dedicated Mule runtimes for the environment
C. They must be triggered by an external client tool or event source
D. They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity
Explanation:
Correct Answer: They are typically written using MUnit to run in an embedded Mule runtime
that does not require external connectivity.
*****************************************
The following TWO are characteristics of integration tests, NOT unit tests:
>> They test the validity of input and output of source and target systems.
>> They must be triggered by an external client tool or event source.
It is also NOT true that unit tests must be run in a unit testing environment with dedicated Mule runtimes for the environment.
MuleSoft provides MUnit for writing unit tests. MUnit tests run in an embedded Mule runtime, so no separate or dedicated runtime is needed to execute them, and they do not require any external connectivity because MUnit supports mocking external dependencies via stubs.
https://dzone.com/articles/munit-framework
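MUnit tests themselves are written as Mule XML configuration. Purely to illustrate the idea the explanation relies on (stubbing the outbound call so a unit test needs no external connectivity), here is a minimal, hypothetical sketch using Python's unittest.mock rather than MUnit syntax; the function and test names are invented for illustration:

```python
# Illustrative only: MUnit tests are Mule XML configuration, not Python.
# This sketch shows the same pattern MUnit uses -- stub the external call so the
# unit test runs without any external connectivity. All names are hypothetical.
import unittest
from unittest import mock

def get_sensor_reading(http_client, sensor_id):
    """Hypothetical flow logic: calls an external system and transforms the result."""
    response = http_client.get(f"/sensors/{sensor_id}")
    return {"id": sensor_id, "celsius": response["value"]}

class GetSensorReadingTest(unittest.TestCase):
    def test_transforms_external_response(self):
        # Stub the external dependency instead of calling a real system,
        # analogous to stubbing an outbound connector call in MUnit.
        stub_client = mock.Mock()
        stub_client.get.return_value = {"value": 21.5}

        result = get_sensor_reading(stub_client, "A1")

        self.assertEqual(result, {"id": "A1", "celsius": 21.5})
        stub_client.get.assert_called_once_with("/sensors/A1")

if __name__ == "__main__":
    unittest.main()
```

In MUnit the equivalent stubbing is done declaratively with mock processors, and the test executes inside the embedded Mule runtime.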
When can CloudHub Object Store v2 be used?
A. To store an unlimited number of key-value pairs
B. To store payloads with an average size greater than 15MB
C. To store information in Mule 4 Object Store v1
D. To store key-value pairs with keys up to 300 characters
Explanation: CloudHub Object Store v2 is a managed key-value store provided by
MuleSoft to support various use cases where temporary data storage is required. Here’s
why Option D is correct:
Key length support: Object Store v2 allows keys up to 300 characters, making it suitable for applications that need flexible, descriptive keys.
Size limitations: Object Store v2 is designed for moderate, transient storage needs and does not support an unlimited number of key-value pairs, so Option A is incorrect. Each stored value is also limited to 10 MB, so payloads with an average size greater than 15 MB (Option B) are not supported.
Backward compatibility: Object Store v2 does not support Mule 4 applications running Object Store v1; v1 and v2 are distinct, so Option C is incorrect.
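For illustration, here is a minimal sketch of writing and reading a key through the Object Store v2 REST API from an external script. The base URL, path layout, IDs, and request body shown are assumptions/placeholders; consult the Object Store v2 REST API documentation for the exact endpoint structure:

```python
# Hypothetical sketch of using the Object Store v2 REST API.
# All URLs, IDs, tokens, and the request body format are placeholders/assumptions.
import requests
from urllib.parse import quote

BASE_URL = "https://object-store-us-east-1.anypoint.mulesoft.com/api/v1"  # assumed region
ORG_ID = "<organization-id>"
ENV_ID = "<environment-id>"
STORE_ID = "<object-store-id>"
TOKEN = "<anypoint-bearer-token>"

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Keys may be up to 300 characters long (the reason Option D is correct).
key = "customer:eu-west:premium:" + "x" * 250
assert len(key) <= 300

key_url = (
    f"{BASE_URL}/organizations/{ORG_ID}/environments/{ENV_ID}"
    f"/stores/{STORE_ID}/keys/{quote(key, safe='')}"
)

# Store a small, transient value (values are size-limited, so Option B is out).
put_resp = requests.put(key_url, headers=headers, json={"value": "some transient state"})
put_resp.raise_for_status()

# Read the value back.
get_resp = requests.get(key_url, headers=headers)
print(get_resp.json())
```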
A TemperatureSensors API instance is defined in API Manager in the PROD environment
of the CAR_FACTORY business group. An AcmeTemperatureSensors Mule
application implements this API instance and is deployed from Runtime Manager to the
PROD environment of the CAR_FACTORY business group. A policy that requires a valid
client ID and client secret is applied in API Manager to the API instance.
Where can an API consumer obtain a valid client ID and client secret to call the
AcmeTemperatureSensors Mule application?
A. In Secrets Manager, request access to the Shared Secret static username/password
B. In API Manager, from the PROD environment of the CAR_FACTORY business group
C. In Access Management, from the PROD environment of the CAR_FACTORY business group
D. In Anypoint Exchange, from an API client application that has been approved for the TemperatureSensors API instance
Explanation:
When an API policy requiring a client ID and client secret is applied to an
API instance in API Manager, API consumers must obtain these credentials through a
registered client application. The consumer creates (or selects) a client application in Anypoint Exchange and requests access to the TemperatureSensors API instance; once that request is approved, the client ID and client secret generated for the client application are used to call the AcmeTemperatureSensors Mule application, which is why Option D is correct.
Refer to the exhibit.

A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Correct Answer: Allow System APIs to return data that is NOT currently required by the
identified Process or Experience APIs.

An API is protected with a Client ID Enforcement policy and uses the default configuration. Access is requested for the client application to the API, and an approved contract now exists between the client application and the API. How can a consumer of this API avoid a 401 error "Unauthorized or invalid client application credentials"?
A. Send the obtained token as a header in every call
B. Send the obtained client_id and client_secret in the request body
C. Send the obtained client_id and client_secret as URI parameters in every call
D. Send the obtained client_id and client_secret in the header of every API request call
Explanation:
When using the Client ID Enforcement policy with default settings,
MuleSoft expects the client_id and client_secret to be provided in the URI parameters of
each request. This policy is typically used to control and monitor access by validating that
each request carries valid credentials. To avoid a 401 Unauthorized error, include the client_id and client_secret obtained for the approved client application on every call, in the location the policy expects (by default, as URI parameters).
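A minimal sketch of such a call, assuming the query-parameter location described above; the endpoint URL and credential values are placeholders, and a header-based variant is included in case the policy is configured to read headers instead:

```python
# Minimal sketch: calling an API protected by Client ID Enforcement.
# The URL and credential values are placeholders; which location the policy
# checks (query parameters vs. headers) depends on how it is configured.
import requests

API_URL = "https://example.cloudhub.io/api/sensors"   # placeholder endpoint
CLIENT_ID = "<client-id-from-approved-application>"
CLIENT_SECRET = "<client-secret-from-approved-application>"

# Default-style configuration: credentials passed as URI (query) parameters.
resp = requests.get(
    API_URL,
    params={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
)
print(resp.status_code)

# If the policy is instead configured to read custom headers, send them there.
resp = requests.get(
    API_URL,
    headers={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
)
print(resp.status_code)
```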
Which APIs can be used with DataGraph to create a unified schema?

A. APIs 1, 3, 5
B. APIs 2, 4 ,6
C. APIs 1, 2, 5, 6
D. APIs 1, 2, 3, 4
Explanation:
To create a unified schema in MuleSoft's DataGraph, APIs must be exposed
in a way that allows DataGraph to pull and consolidate data from these APIs into a single
schema accessible to consumers. DataGraph provides a federated approach, combining
multiple APIs to form a single, unified API endpoint.
In this setup:
APIs 1, 2, 3, and 4 are suitable candidates for DataGraph because they are hosted
within the Customer VPC on CloudHub and are accessible either through a
Shared Load Balancer (LB) or a Dedicated Load Balancer (DLB). Both of these
load balancers provide public access, which is a necessary condition for
DataGraph as it must access the APIs to aggregate data.
APIs 5 and 6 are hosted on Customer Hosted Server 2, which is explicitly marked
as "Not public". Since DataGraph requires API access through a publicly
reachable endpoint to aggregate them into a unified schema, APIs 5 and 6 cannot
be used with DataGraph in this configuration.
APIs 3 and 4 on Customer Hosted Server 1 appear accessible through a Shared
LB, implying public accessibility that meets DataGraph’s requirements.
By combining APIs 1, 2, 3, and 4 within DataGraph, you can create a unified schema that
enables clients to query data seamlessly from all these APIs as if it were from a single
source.
This setup allows for efficient data retrieval and can simplify API consumption by reducing
the need to call multiple APIs individually, thus optimizing performance and developer
experience.
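As an illustration, a unified DataGraph schema is queried with GraphQL over a single endpoint. The endpoint URL, authentication header, and field names below are hypothetical and depend on the APIs actually added to the schema:

```python
# Minimal sketch of querying a unified Anypoint DataGraph schema.
# The endpoint URL, authentication header, and field names (customer, orders)
# are hypothetical -- they depend on the APIs added to the unified schema.
import requests

DATAGRAPH_URL = "https://<your-datagraph-endpoint>/graphql"   # placeholder
API_KEY = "<datagraph-api-key>"                               # placeholder

# One GraphQL query can span fields contributed by several of the added APIs
# (e.g. APIs 1-4 above), instead of calling each API individually.
query = """
query {
  customer(id: "42") {
    name
    orders {
      id
      total
    }
  }
}
"""

resp = requests.post(
    DATAGRAPH_URL,
    json={"query": query},
    headers={"x-api-key": API_KEY},
)
resp.raise_for_status()
print(resp.json())
```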
When must an API implementation be deployed to an Anypoint VPC?
A. When the API implementation must invoke publicly exposed services that are deployed outside of CloudHub in a customer-managed AWS instance
B. When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access
C. When the API implementation must be deployed to a production AWS VPC using the Mule Maven plugin
D. When the API implementation must write to a persistent Object Store
Correct Answer: When the API implementation must invoke publicly exposed services that are deployed outside of CloudHub in a customer-managed AWS instance