MuleSoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 1-Dec-2025



MuleSoft MCPA-Level-1 exam questions feature realistic, exam-like questions that cover all key topics with detailed explanations. You’ll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you’ll gain the knowledge, speed, and confidence needed to pass the MuleSoft exam on your first attempt.

Why leave your success to chance? Our MuleSoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?


A.

Use a CloudHub autoscaling policy to add CloudHub workers


B.

Use a CloudHub autoscaling policy to increase the size of the CloudHub worker


C.

Increase the size of the CloudHub worker(s)


D.

Increase the number of CloudHub workers





C.
  

Increase the size of the CloudHub worker(s)



Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take out from the given scenario are:
>> The API implementation uses a database bulk insert command to submit all the purchase data to a database
>> The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker
>> Sometimes a request fails, and the logs show a message from the JDBC driver indicating an out-of-file-space condition
Based on the above details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be triggered by error messages. Auto-scaling rules fire based on CPU/memory usage, not on specific errors or disk-space conditions.
>> Increasing the number of CloudHub workers also does NOT help, because the failure is not caused by CPU or memory pressure; it is caused by running out of disk space.
>> Moreover, the API performs a bulk insert of each received batch, which means all of a batch's data is handled by ONE worker at a time. The disk-space issue must therefore be tackled on a "per worker" basis: with multiple workers, a batch can still fail on whichever individual worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker(s), so that workers are provisioned with more disk space.
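For reference, worker size is part of the application's CloudHub deployment configuration. Below is a minimal sketch of a cloudHubDeployment section for the mule-maven-plugin; the application name, environment, and version values are placeholders, and the available workerType values should be confirmed against the current MuleSoft documentation.

    <!-- Sketch: CloudHub deployment settings in pom.xml (placeholder values) -->
    <plugin>
      <groupId>org.mule.tools.maven</groupId>
      <artifactId>mule-maven-plugin</artifactId>
      <version>3.8.2</version>
      <extensions>true</extensions>
      <configuration>
        <cloudHubDeployment>
          <uri>https://anypoint.mulesoft.com</uri>
          <muleVersion>4.4.0</muleVersion>
          <applicationName>purchase-ingest-api</applicationName>
          <environment>Production</environment>
          <!-- Keep one worker (each batch is bulk-inserted by a single worker),
               but increase its size so it is provisioned with more disk space -->
          <workers>1</workers>
          <workerType>Large</workerType>
        </cloudHubDeployment>
      </configuration>
    </plugin>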

What is typically NOT a function of the APIs created within the framework called API-led connectivity?


A.

They provide an additional layer of resilience on top of the underlying backend system,
thereby insulating clients from extended failure of these systems.


B.

They allow for innovation at the user interface level by consuming the underlying assets
without being aware of how data is being extracted from backend systems.


C.

They reduce the dependency on the underlying backend systems by helping unlock data
from backend systems in a reusable and consumable way.


D.

They can compose data from various sources and combine them with orchestration logic to create higher level value.





A.
  

They provide an additional layer of resilience on top of the underlying backend system,
thereby insulating clients from extended failure of these systems.



Explanation:
Correct Answer: They provide an additional layer of resilience on top of the underlying
backend system, thereby insulating clients from extended failure of these systems.
*****************************************
In API-led connectivity,
>> Experience APIs - allow for innovation at the user interface level by consuming the
underlying assets without being aware of how data is being extracted from backend
systems.
>> Process APIs - compose data from various sources and combine them with
orchestration logic to create higher level value
>> System APIs - reduce the dependency on the underlying backend systems by helping
unlock data from backend systems in a reusable and consumable way.
However, these APIs NEVER promise to provide an additional layer of resilience on top of
the underlying backend systems, nor to insulate clients from extended failures of those
systems.
Reference: https://dzone.com/articles/api-led-connectivity-with-mule
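To make the layering concrete, here is a minimal, hypothetical Mule 4 flow for an Experience API that simply adapts and delegates to a Process API; the config names, paths, and parameters are illustrative only, not taken from any real project.

    <!-- Hypothetical Experience API flow for a mobile channel. It consumes the
         underlying asset (a Process API) without knowing how data is extracted
         from backend systems; that concern belongs to the System APIs. -->
    <flow name="mobile-get-order">
        <http:listener config-ref="Mobile_HTTPS_Listener" path="/mobile/orders/{orderId}"/>
        <!-- Delegate to the Process API; no backend-specific logic lives here -->
        <http:request method="GET" config-ref="Order_Process_API" path="/orders/{orderId}">
            <http:uri-params>#[{ orderId: attributes.uriParams.orderId }]</http:uri-params>
        </http:request>
    </flow>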

What API policy would be LEAST LIKELY used when designing an Experience API that is intended to work with a consumer mobile phone or tablet application?


A.

OAuth 2.0 access token enforcement


B.

Client ID enforcement


C.

JSON threat protection


D.

IP whitelist





D.
  

IP whitelist



Explanation:
Correct Answer: IP whitelist
*****************************************
>> OAuth 2.0 access token enforcement and client ID enforcement policies are VERY commonly applied to Experience APIs, as API consumers need to register and access the APIs using one of these mechanisms.
>> JSON threat protection is also a VERY common policy on Experience APIs, to prevent bad or suspicious payloads from hitting the API implementations.
>> An IP whitelist policy is most commonly used on Process and System APIs, to whitelist only the IP range inside the local VPC. It is occasionally applied to Experience APIs as well, but only where the end users/API consumers are FIXED.
>> When we know upfront which API consumers will access a given Experience API, we can request static IPs from those consumers and whitelist them to prevent anyone else from hitting the API.
However, the Experience API in this scenario is intended to work with a consumer mobile phone or tablet application. That means there is no way to know all the IPs to whitelist: mobile phones and tablets are enormous in number and could be any device anywhere in the world.
So, an IP whitelist is the LEAST LIKELY policy to apply to an Experience API whose consumers are typically mobile phones or tablets.

An API experiences a high rate of client requests (TPS) with small message payloads.
How can usage limits be imposed on the API based on the type of client application?


A.

Use an SLA-based rate limiting policy and assign a client application to a matching SLA
tier based on its type


B.

Use a spike control policy that limits the number of requests for each client application
type


C.

Use a cross-origin resource sharing (CORS) policy to limit resource sharing between
client applications, configured by the client application type


D.

Use a rate limiting policy and a client ID enforcement policy, each configured by the
client application type





A.
  

Use an SLA-based rate limiting policy and assign a client application to a matching SLA
tier based on its type



Explanation:
Correct Answer: Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type.
*****************************************
>> SLA tiers come into play whenever limits must be imposed on an API based on the type of client application: each SLA tier defines its own request limits, each client application is approved for the tier matching its type, and the SLA-based rate limiting policy then enforces that tier's limits per application.
Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies
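As an illustration, the platform can only match a request to an SLA tier if it can identify the client application, so a registered application sends its credentials with each request. The conventional names are client_id/client_secret, but the exact header or query parameter names depend on how the policy is configured; the values below are placeholders.

    # Hypothetical request from a client application approved for an SLA tier
    curl -H "client_id: <app-client-id>" \
         -H "client_secret: <app-client-secret>" \
         https://api.example.com/orders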

An Order API must be designed that contains significant amounts of integration logic and
involves the invocation of the Product API.
The power relationship between Order API and Product API is one of "Customer/Supplier",
because the Product API is used heavily throughout the organization and is developed by a
dedicated development team located in the office of the CTO.
What strategy should be used to deal with the API data model of the Product API within the
Order API?


A.

Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model


B.

Work with the API data types of the Product API directly when implementing the integration logic of the Order API such that the Order API uses the same (unchanged) data types as the Product API


C.

Implement an anti-corruption layer in the Order API that transforms the Product API data
model into internal data types of the Order API


D.

Start an organization-wide data modeling initiative that will result in an Enterprise Data
Model that will then be used in both the Product API and the Order API





A.
  

Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model



Explanation:
Correct Answer: Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model
*****************************************
Key details to note from the given scenario:
>> The power relationship between the Order API and the Product API is one of Customer/Supplier.
Under the rules of "Power Relationships", the caller (in this case the Order API team, as the customer) can request features from the team of the called API (the Product API team, as the supplier), and the Product API team would need to accommodate those requests.

Which component monitors APIs and endpoints at scheduled intervals, receives reports about whether tests pass or fail, and displays statistics about API and endpoint performance?


A. API Analytics


B. Anypoint Monitoring dashboards


C. API Functional Monitoring


D. Anypoint Runtime Manager alerts





C.
  API Functional Monitoring

Explanation:
API Functional Monitoring runs scheduled tests against APIs and endpoints at configurable intervals, reports whether those tests pass or fail, and displays statistics about API and endpoint performance.
>> API Analytics and Anypoint Monitoring dashboards visualize metrics about live API traffic and application performance, but they do not execute scheduled functional tests.
>> Anypoint Runtime Manager alerts notify on runtime conditions (such as CPU usage or application status changes), not on functional test results.
Refer to the MuleSoft documentation on API Functional Monitoring for further guidance on setting up and configuring these tests in Anypoint Platform.
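As an illustrative sketch, functional monitors are written as BAT (Blackbox Automated Testing) tests in a DataWeave-based BDD syntax and scheduled in Anypoint Functional Monitoring. The endpoint URL below is a placeholder, and the exact syntax should be checked against the BAT documentation.

    import * from bat::BDD
    import * from bat::Assertions
    ---
    describe `Orders API health check` in [
        it must 'answer 200 on GET /health' in [
            GET `https://orders.example.com/api/health` with {} assert [
                $.response.status mustEqual 200
            ]
        ]
    ]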

Refer to the exhibit.

An organization uses one specific CloudHub (AWS) region for all CloudHub deployments.
How are CloudHub workers assigned to availability zones (AZs) when the organization's
Mule applications are deployed to CloudHub in that region?


A.

Workers belonging to a given environment are assigned to the same AZ within that region


B.

AZs are selected as part of the Mule application's deployment configuration


C.

Workers are randomly distributed across available AZs within that region


D.

An AZ is randomly selected for a Mule application, and all the Mule application's CloudHub workers are assigned to that one AZ





C.
  

Workers are randomly distributed across available AZs within that region



Explanation:
Correct Answer: Workers are randomly distributed across available AZs within that region.
*****************************************
>> Currently, we can only choose the AWS region; there is no configuration or deployment option at all to decide which Availability Zone (AZ) is assigned to which worker.
>> There are also NO fixed or implicit platform rules assigning AZs to workers based on environment or application.
>> AZs are assigned completely at random. However, CloudHub does ensure HA by distributing an application's workers across more than one AZ, so that all workers of the same application do not end up in the same AZ.
Reference: https://help.mulesoft.com/s/question/0D52T000051rqDj/one-cloudhub-aws-region-howcloudhub-workers-are-assigned-to-availability-zones-azs-

Refer to the exhibit.

What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?


A.

Anypoint Runtime Manager initiates a network connection to a Mule runtime in order to deploy Mule applications


B.

The MuleSoft-hosted Shared Load Balancer can be used to load balance API
invocations to the Mule runtimes


C.

API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane


D.

Anypoint Runtime Manager automatically ensures HA in the control plane by creating a new Mule runtime instance in case of a node failure





C.
  

API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane



Explanation:
Correct Answer: API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane.
*****************************************
>> In a hybrid deployment, the customer-hosted Mule runtime initiates an outbound connection to the MuleSoft-hosted control plane; Anypoint Runtime Manager never initiates connections into the customer's network.
>> We CANNOT use the MuleSoft-hosted Shared Load Balancer to load balance API invocations to customer-hosted runtimes; it only fronts applications deployed to CloudHub workers.
>> Anypoint Runtime Manager does not provision customer-hosted Mule runtimes, so it cannot ensure HA by creating a new runtime instance when a node fails.
>> Mule applications that are already deployed keep running even when connectivity to the control plane is lost; only management functions such as deployment, monitoring, and alerting are affected.
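As a sketch of why option A is wrong and option C is right: a customer-hosted Mule runtime is registered from the customer's side, for example with the amc_setup script shipped with the runtime, and the runtime then opens an outbound connection to the control plane. The token and server name below are placeholders.

    # Hypothetical registration of a customer-hosted Mule runtime with Anypoint
    # Runtime Manager; the runtime/agent initiates the OUTBOUND connection to the
    # control plane. <registration-token> comes from Runtime Manager's Add Server dialog.
    $MULE_HOME/bin/amc_setup -H <registration-token> my-on-prem-server-01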

