MuleSoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 3-Nov-2025



MuleSoft MCPA-Level-1 exam questions feature realistic, exam-like items that cover all key topics, each with a detailed explanation. You'll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you'll gain the knowledge, speed, and confidence needed to pass the MuleSoft exam on your first attempt.

Why leave your success to chance? Our MuleSoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

What should be ensured before sharing an API through a public Anypoint Exchange portal?


A.

The visibility level of the API instances of that API that need to be publicly accessible should be set to public visibility


B.

The users needing access to the API should be added to the appropriate role in
Anypoint Platform


C.

The API should be functional with at least an initial implementation deployed and accessible for users to interact with


D.

The API should be secured using one of the supported authentication/authorization mechanisms to ensure that data is not compromised





A.
  

The visibility level of the API instances of that API that need to be publicly accessible should be set to public visibility



Explanation: Correct Answer: The visibility level of the API instances of that API that need to be publicly accessible should be set to public visibility
*****************************************
>> An API asset and its instances appear on a public Anypoint Exchange portal only when their visibility level is set to public; this is the setting that must be verified before sharing.
>> Adding users to roles in Anypoint Platform, having a deployed implementation, or applying an authentication/authorization mechanism may all be good practice, but none of them is a prerequisite for sharing the API through the public portal.

What is the most performant out-of-the-box solution in Anypoint Platform to track
transaction state in an asynchronously executing long-running process implemented as a
Mule application deployed to multiple CloudHub workers?


A.

Redis distributed cache


B.

java.util.WeakHashMap


C.

Persistent Object Store


D.

File-based storage





C.
  

Persistent Object Store



Explanation: Correct Answer: Persistent Object Store
*****************************************
>> A Redis distributed cache is performant but is NOT an out-of-the-box solution in Anypoint Platform.
>> File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.
>> java.util.WeakHashMap requires a completely custom cache implementation written from scratch in Java and is limited to the JVM where it runs, so its state is not shared across workers: each worker holds its own local copy. It is therefore neither out-of-the-box nor worker-aware across multiple workers on CloudHub. https://www.baeldung.com/java-weakhashmap
>> A Persistent Object Store is an out-of-the-box Anypoint Platform feature that is performant and shares state across multiple workers running on CloudHub. https://docs.mulesoft.com/object-store/
So, Persistent Object Store is the right answer; the sketch below shows why a JVM-local map cannot do this job.
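
For illustration only, here is a minimal Java sketch (not exam material) of the WeakHashMap limitation: a second worker would hold its own empty copy of the map, and even within one worker the weakly referenced entries can be garbage-collected at any time. The class and key names are hypothetical.

import java.util.Map;
import java.util.WeakHashMap;

public class WorkerLocalCacheDemo {

    public static void main(String[] args) throws Exception {
        // This map lives inside one JVM, i.e. one CloudHub worker.
        // A second worker would see its own, separate, empty map.
        Map<String, String> localCache = new WeakHashMap<>();

        // 'new String' keeps the key weakly reachable once we null it below.
        String key = new String("txn-42");
        localCache.put(key, "IN_PROGRESS");
        System.out.println("Before GC: " + localCache.get("txn-42"));

        // Drop the only strong reference to the key and suggest a GC run.
        key = null;
        System.gc();
        Thread.sleep(100);

        // The entry may now be gone: weak keys are collected eagerly, so
        // transaction state can be lost even on a single worker.
        System.out.println("After GC: " + localCache.get("txn-42"));
    }
}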

A company has created a successful enterprise data model (EDM). The company is
committed to building an application network by adopting modern APIs as a core enabler of
the company's IT operating model. At what API tiers (experience, process, system) should
the company require reusing the EDM when designing modern API data models?


A.

At the experience and process tiers


B.

At the experience and system tiers


C.

At the process and system tiers


D.

At the experience, process, and system tiers





C.
  

At the process and system tiers



Explanation: Correct Answer: At the process and system tiers
*****************************************
>> Experience Layer APIs are modeled and designed exclusively for the end user's experience, so the data models of the experience layer vary with the nature and type of the API consumer. For example, mobile consumers need lightweight data models that transfer easily over the wire, whereas web-based consumers need detailed data models to render most of the information on web pages. Enterprise data models are fit for the purpose of canonical models, but they are not a good fit for experience APIs.
>> That is why EDMs should be used extensively in the process and system tiers but NOT in the experience tier, as the sketch below illustrates.
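
As an illustrative sketch (all type names hypothetical, written in Java for brevity), the contrast between a canonical EDM type reused by process and system APIs and a trimmed experience-tier model might look like this:

import java.util.List;

public class DataModelTiers {

    // Canonical enterprise data model: reused as-is by system and process APIs.
    record CustomerEDM(
            String customerId,
            String legalName,
            String taxIdentifier,
            List<String> registeredAddresses,
            List<String> accountNumbers) { }

    // Experience-tier model for a mobile consumer: deliberately trimmed to
    // what the screen needs, so it diverges from the EDM by design.
    record MobileCustomerView(String customerId, String displayName) { }

    // The experience API maps the canonical model down to its own shape.
    static MobileCustomerView toMobileView(CustomerEDM edm) {
        return new MobileCustomerView(edm.customerId(), edm.legalName());
    }
}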

A customer wants to host their MuleSoft applications in CloudHub 1.0, and these applications should be available at the domain https://api.acmecorp.com.
After creating a dedicated load balancer (DLB) called acme-dlb-prod, which further action must the customer take to complete the configuration?


A. Configure the DLB with a TLS certificate for api.acmecorp.com and create an A record for api.acmecorp.com to the public IP addresses associated with their DLB


B. Configure the DLB with a TLS certificate for api.acmecorp.com and create a CNAME record from api.acmecorp.com to acme-dlb-prod.lb.anypointdns.net


C. Configure the DLB with a TLS certificate for acme-dlb-prod.lb.anypointdns.net and create a CNAME record from api.acmecorp.com to acme-dlb-prod.lb.anypointdns.net


D. Configure the DLB with a TLS certificate for api.acmecorp.com and create a CNAME record from api.acmecorp.com to acme-dlb-prod.ei.cloudhub.io





B.
  Configure the DLB with a TLS certificate for api.acmecorp.com and create a CNAME record from api.acmecorp.com to acme-dlb-prod.lb.anypointdns.net

Explanation:
When setting up a custom domain for MuleSoft applications hosted on CloudHub 1.0 using a Dedicated Load Balancer (DLB), follow these steps:
Set Up the TLS Certificate: Configure the DLB (acme-dlb-prod) with a TLS certificate that covers the custom domain api.acmecorp.com. This certificate allows HTTPS traffic to be securely directed through the DLB to your Mule applications.

  • DNS Configuration with CNAME: Create a CNAME record that maps api.acmecorp.com to the DLB's generated hostname, acme-dlb-prod.lb.anypointdns.net, so requests to the custom domain resolve to the load balancer (a verification sketch follows this list).
  • Why Option B is Correct: It pairs a certificate for the actual public hostname (api.acmecorp.com) with a CNAME to the DLB's anypointdns.net address, which is the configuration MuleSoft documents for custom domains on a DLB.
  • Explanation of Incorrect Options: Option A uses an A record to the DLB's IP addresses instead of the documented CNAME approach; Option C issues the certificate for the DLB's internal hostname rather than the public domain; Option D points the CNAME at a cloudhub.io address instead of the DLB's anypointdns.net name.
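
As a sketch only, once the CNAME record is in place it can be verified from Java using the JDK's JNDI DNS provider (the hostname comes from the question; nothing here is MuleSoft-specific):

import java.util.Hashtable;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class CnameCheck {

    public static void main(String[] args) throws Exception {
        // Use the JDK's built-in DNS service provider for JNDI.
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);

        // Ask DNS only for the CNAME record of the custom domain.
        Attributes attrs = ctx.getAttributes("api.acmecorp.com", new String[] {"CNAME"});

        // Expected once configured: CNAME: acme-dlb-prod.lb.anypointdns.net.
        System.out.println(attrs.get("CNAME"));
    }
}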

What is true about automating interactions with Anypoint Platform using tools such as Anypoint Platform REST APIs, Anypoint CLI, or the Mule Maven plugin?


A.

Access to Anypoint Platform APIs and Anypoint CLI can be controlled separately through the roles and permissions in Anypoint Platform, so that specific users can get access to Anypoint CLI while others get access to the platform APIs


B.

Anypoint Platform APIs can ONLY automate interactions with CloudHub, while the Mule Maven plugin is required for deployment to customer-hosted Mule runtimes


C.

By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications


D.

API policies can be applied to the Anypoint Platform APIs so that ONLY certain LOBs have access to specific functions





C.
  

By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications



Explanation: Correct Answer: By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications
*****************************************
>> We CANNOT apply API policies to the Anypoint Platform APIs the way we can on our custom-written API instances. So, the option suggesting this is FALSE.
>> Anypoint Platform APIs can be used to automate interactions with both CloudHub and customer-hosted Mule runtimes, not just CloudHub. So, the option opposing this is FALSE.
>> The Mule Maven plugin is NOT mandatory for deployment to customer-hosted Mule runtimes. It simply makes CI/CD automation smoother; it is not a compulsory requirement to deploy. So, the option opposing this is FALSE.
>> There are NO special roles and permissions on the platform that separately grant some users access to the Anypoint CLI and others access to the Anypoint Platform APIs. With the appropriate general roles/permissions (API Owner, CloudHub Admin, etc.), one can use either the Anypoint CLI or the Platform APIs. So, the option suggesting this is FALSE.
The only TRUE statement among the choices is that the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so they are NOT available to be used by deployed Mule applications.
Maven is bundled with Anypoint Studio, or you can use another Maven installation for development. The CLI is a convenience only, one of many ways to deploy an application to the runtime. Both belong to your deployment or automation process, not to the runtime itself. A sketch of calling the platform APIs from an external script follows.
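
To make that concrete, here is a minimal Java sketch of a CI script authenticating against the Anypoint Platform REST APIs from outside any Mule runtime. It assumes the commonly documented /accounts/login endpoint and uses placeholder credentials; treat it as a sketch under those assumptions, not official client code.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AnypointLoginDemo {

    public static void main(String[] args) throws Exception {
        // Placeholder credentials; a real script reads them from a secret store.
        String body = "{\"username\":\"ci-user\",\"password\":\"changeme\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://anypoint.mulesoft.com/accounts/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // A successful response carries an access token that is then sent as a
        // Bearer token on subsequent Anypoint Platform API calls.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}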

A system API is deployed to a primary environment as well as to a disaster recovery (DR)
environment, with different DNS names in each environment. A process API is a client to
the system API and is being rate limited by the system API, with different limits in each of
the environments. The system API's DR environment provides only 20% of the rate limiting
offered by the primary environment. What is the best API fault-tolerant invocation strategy
to reduce overall errors in the process API, given these conditions and constraints?


A.

Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment


B.

Invoke the system API deployed to the primary environment; add retry logic to the process
API to handle intermittent failures by invoking the system API deployed to the DR
environment


C.

In parallel, invoke the system API deployed to the primary environment and the system API
deployed to the DR environment; add timeout and retry logic to the process API to avoid
intermittent failures; add logic to the process API to combine the results


D.

Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke a copy of the process API
deployed to the DR environment





A.
  

Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment



Explanation: Correct Answer: Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the system API deployed to the DR environment
*****************************************
There is one important consideration to note in the question: the system API in the DR environment provides only 20% of the rate limiting offered by the primary environment. Comparatively, far fewer calls are allowed into the DR environment's API than into the primary one. With this in mind, let us analyse which fault-tolerant invocation strategy is right and best.
1. Invoking both system APIs in parallel is definitely NOT feasible because of the 20% limitation on the DR environment. Calling both in parallel every time would quickly exhaust the DR environment's rate limits and leave no headroom for genuine intermittent-failure scenarios when fallback is actually needed.
2. Another option suggests adding timeout and retry logic to the process API while invoking the primary environment's system API. That is good so far; however, when all retries have failed, this option suggests invoking a copy of the process API in the DR environment, which is not right or recommended. Only the system API should be considered for fallback, not the whole process API. Process APIs usually perform heavy orchestration across many other APIs, and we do not want to repeat all of that by calling the DR environment's process API. So this option is NOT right.
3. One more option suggests adding retry logic (without a timeout) to the process API so that it retries directly against the DR environment's system API instead of first retrying the primary environment's system API. This is not a proper fallback at all: a proper fallback should occur only after all retries against the primary environment have been performed and exhausted. Here, the option suggests invoking the fallback API on the first failure, without retrying the main API. So this option is NOT right either.
This leaves us one option that is right and the best fit:
- Invoke the system API deployed to the primary environment
- Add timeout and retry logic to it in the process API
- If it still fails after all retries, invoke the system API deployed to the DR environment
A minimal sketch of this strategy follows.
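
For illustration only, a minimal Java sketch of the chosen strategy using the JDK's HttpClient; the URLs, retry count, and timeout are hypothetical, and in a real Mule application this logic would live in the process API's flow (for example, an Until Successful scope plus error handlers) rather than hand-rolled Java.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class FailoverInvoker {

    // Hypothetical endpoints for the two deployments of the system API.
    private static final String PRIMARY = "https://system-api.primary.example.com/orders";
    private static final String DR = "https://system-api.dr.example.com/orders";
    private static final int MAX_RETRIES = 3;

    private final HttpClient client = HttpClient.newHttpClient();

    public String invokeWithFailover() throws Exception {
        // Step 1: call the primary environment with a timeout, retrying on failure.
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                return call(PRIMARY);
            } catch (Exception intermittent) {
                System.err.println("Primary attempt " + attempt + " failed: " + intermittent);
            }
        }
        // Step 2: only after exhausting retries on primary, fall back to DR.
        // DR allows only ~20% of the primary rate limit, so it is never called
        // in parallel or on the first failure.
        return call(DR);
    }

    private String call(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofSeconds(5)) // per-attempt timeout
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 500) {
            throw new IllegalStateException("Server error: " + response.statusCode());
        }
        return response.body();
    }
}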

An API implementation is being designed that must invoke an Order API, which is known to
repeatedly experience downtime.
For this reason, a fallback API is to be called when the Order API is unavailable.
What approach to designing the invocation of the fallback API provides the best resilience?


A.

Search Anypoint Exchange for a suitable existing fallback API, and then implement
invocations to this fallback API in addition to the Order API


B.

Create a separate entry for the Order API in API Manager, and then invoke this API as a
fallback API if the primary Order API is unavailable


C.

Redirect client requests through an HTTP 307 Temporary Redirect status code to the
fallback API whenever the Order API is unavailable


D.

Set an option in the HTTP Requester component that invokes the Order API to instead
invoke a fallback API whenever an HTTP 4xx or 5xx response status code is returned from
the Order API





A.
  

Search Anypoint Exchange for a suitable existing fallback API, and then implement
invocations to this fallback API in addition to the Order API



Explanation: Correct Answer: Search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API
*****************************************
>> Redirecting clients with an HTTP 3xx status code is not an ideal or good approach unless there is a pre-approved agreement with the API clients that they will receive a temporary redirect and will implement fallback logic on their side to call another API.
>> Creating a separate entry for the same Order API in API Manager would just create another instance on top of the same API implementation. Using a clone of the same API as a fallback does NO GOOD: a fallback API should ideally be a different API implementation, not the same one as the primary.
>> There is currently NO option in the Anypoint HTTP Connector to invoke a fallback API when certain HTTP status codes are received in response.
The only TRUE statement among the given options is to search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API. A sketch of this caller-side fallback pattern follows.
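
Since the connector offers no built-in fallback switch, the invocation logic must be written explicitly in the API implementation. A minimal Java sketch of that pattern follows (both URLs are hypothetical; in a Mule flow this would typically be an error handler around the HTTP Request operation):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderApiCaller {

    // Hypothetical endpoints: the unreliable Order API and its fallback.
    private static final String ORDER_API = "https://order-api.example.com/orders/42";
    private static final String FALLBACK_API = "https://order-fallback.example.com/orders/42";

    private final HttpClient client = HttpClient.newHttpClient();

    public String fetchOrder() throws Exception {
        try {
            HttpResponse<String> response = get(ORDER_API);
            if (response.statusCode() < 400) {
                return response.body();
            }
            // A 4xx/5xx from the Order API is treated as "unavailable".
        } catch (Exception downtime) {
            // Connection failures also route to the fallback below.
        }
        return get(FALLBACK_API).body();
    }

    private HttpResponse<String> get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}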

What correctly characterizes unit tests of Mule applications?


A.

They test the validity of input and output of source and target systems


B.

They must be run in a unit testing environment with dedicated Mule runtimes for the environment


C.

They must be triggered by an external client tool or event source


D.

They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity





D.
  

They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity



Explanation: Correct Answer: They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity.
*****************************************
The following TWO are characteristics of integration tests, NOT unit tests:
>> They test the validity of input and output of source and target systems.
>> They must be triggered by an external client tool or event source.
It is NOT TRUE that unit tests must be run in a unit testing environment with dedicated Mule runtimes for the environment.
MuleSoft offers MUnit for writing unit tests, and these run in an embedded Mule runtime without needing any separate or dedicated runtimes to execute them. They also do NOT need any external connectivity, as MUnit supports mocking via stubs; a conceptual sketch of such mocking follows.
https://dzone.com/articles/munit-framework
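
MUnit tests themselves are written as Mule XML, but the idea of stubbing away external connectivity can be sketched in plain Java with JUnit 5 and Mockito (both assumed on the test classpath; the OrderClient interface is hypothetical), much like MUnit's Mock When stubs a connector operation:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderLookupTest {

    // Hypothetical outbound dependency that would normally hit a remote system.
    interface OrderClient {
        String fetchStatus(String orderId);
    }

    @Test
    void returnsStatusWithoutExternalConnectivity() {
        // Stub the external call so the test needs no network access,
        // analogous to mocking a connector operation in MUnit.
        OrderClient client = mock(OrderClient.class);
        when(client.fetchStatus("42")).thenReturn("SHIPPED");

        assertEquals("SHIPPED", client.fetchStatus("42"));
    }
}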

