MuleSoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 15-Dec-2025



MuleSoft MCPA-Level-1 exam questions feature realistic, exam-like questions that cover all key topics with detailed explanations. You’ll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you’ll gain the knowledge, speed, and confidence needed to pass the MuleSoft exam on your first attempt.

Why leave your success to chance? Our MuleSoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

What API policy would be LEAST LIKELY used when designing an Experience API that is intended to work with a consumer mobile phone or tablet application?


A.

OAuth 2.0 access token enforcement


B.

Client ID enforcement


C.

JSON threat protection


D.

IP whitelist





D.
  

IP whitelist



Explanation:
Correct Answer: IP whitelist
*****************************************
>> OAuth 2.0 access token enforcement and Client ID enforcement policies are VERY common on Experience APIs, as API consumers need to register and access the APIs using one of these mechanisms.
>> JSON threat protection is also a VERY common policy on Experience APIs, to prevent bad or suspicious payloads from hitting the API implementations.
>> The IP whitelist policy is most common on Process and System APIs, to whitelist only the IP range inside the local VPC. It is also occasionally applied to Experience APIs whose end users/API consumers are FIXED.
>> When we know upfront which API consumers are going to access certain Experience APIs, we can request static IPs from those consumers and whitelist them to prevent anyone else from hitting the API.
However, the Experience API in this scenario is intended to work with a consumer mobile phone or tablet application. That means there is no way to know all the IPs that would need to be whitelisted, as mobile phones and tablets are vast in number and could be any device in the city/state/country/globe.
So, an IP whitelist is the LEAST LIKELY policy to apply to Experience APIs whose consumers are typically mobile phones or tablets.
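To make this concrete, here is a minimal Python sketch of an IP whitelist check (illustrative only, not how a MuleSoft policy is actually implemented; the whitelisted range is a hypothetical example):

```python
# A minimal sketch (not MuleSoft's actual policy engine) of an IP whitelist
# check. It only works when consumer IPs are fixed and known upfront.
import ipaddress

# Hypothetical whitelist: the static IP range of a known, fixed consumer.
WHITELIST = [ipaddress.ip_network("10.0.0.0/24")]

def is_allowed(client_ip: str) -> bool:
    """Allow the call only if the client IP falls inside a whitelisted range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in network for network in WHITELIST)

print(is_allowed("10.0.0.42"))    # True  -- a fixed, known consumer
print(is_allowed("203.0.113.7"))  # False -- a random mobile device on a carrier
# Mobile and tablet clients get unpredictable, carrier-assigned IPs, so no
# finite whitelist can cover them -- hence LEAST LIKELY for this Experience API.
```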

A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?


A.

Use a CloudHub autoscaling policy to add CloudHub workers


B.

Use a CloudHub autoscaling policy to increase the size of the CloudHub worker


C.

Increase the size of the CloudHub worker(s)


D.

Increase the number of CloudHub workers





C.
  

Increase the size of the CloudHub worker(s)



Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details we can take from the given scenario are:
>> The API implementation uses a database bulk insert command to submit all the purchase data to a database.
>> The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker.
>> Sometimes a request fails, and the logs show an out-of-file-space message from the JDBC driver.
Based on the above details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be set based on error messages. Auto-scaling rules are triggered by CPU/memory usage, not by a given error or a disk space issue.
>> Increasing the number of CloudHub workers also does NOT help, because the failure is not caused by CPU or memory performance; it is caused by disk space.
>> Moreover, the API performs a bulk insert to submit each received batch, which means all the data in a batch is handled by ONE worker at a time. So the disk space issue must be tackled on a per-worker basis. Having multiple workers does not help, as a batch may still fail on any worker whose disk runs out of space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker, so that a worker with more disk space is provisioned.
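As a rough illustration (with hypothetical disk sizes, not actual CloudHub worker specifications), the difference between scaling out and scaling up can be sketched as follows:

```python
# A toy model (hypothetical numbers) of why adding workers does not fix a
# per-worker disk limit: each batch is bulk-inserted by ONE worker, so the
# whole set of temporary files must fit on that single worker's disk.
BATCH_TEMP_FILES_GB = 12  # assumed temp-file footprint of one store's batch

def batch_succeeds(disk_per_worker_gb: float) -> bool:
    """A batch is processed entirely by a single worker."""
    return BATCH_TEMP_FILES_GB <= disk_per_worker_gb

# Scaling OUT: four workers with 8 GB of disk each -- every batch still fails.
print([batch_succeeds(8) for _ in range(4)])  # [False, False, False, False]

# Scaling UP: one larger worker with 20 GB of disk -- the batch now fits.
print(batch_succeeds(20))                     # True
```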

An API implementation is being designed that must invoke an Order API, which is known to
repeatedly experience downtime.
For this reason, a fallback API is to be called when the Order API is unavailable.
What approach to designing the invocation of the fallback API provides the best resilience?


A.

Search Anypoint Exchange for a suitable existing fallback API, and then implement
invocations to this fallback API in addition to the Order API


B.

Create a separate entry for the Order API in API Manager, and then invoke this API as a
fallback API if the primary Order API is unavailable


C.

Redirect client requests through an HTTP 307 Temporary Redirect status code to the
fallback API whenever the Order API is unavailable


D.

Set an option in the HTTP Requester component that invokes the Order API to instead
invoke a fallback API whenever an HTTP 4xx or 5xx response status code is returned from
the Order API





A.
  

Search Anypoint Exchange for a suitable existing fallback API, and then implement
invocations to this fallback API in addition to the Order API



Explanation:
Correct Answer: Search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API
*****************************************
>> Redirecting client requests is neither an ideal nor a good approach, unless there is a pre-approved agreement with the API clients that they will receive an HTTP 3xx temporary redirect status code and will implement fallback logic on their side to call another API.
>> Creating a separate entry for the same Order API in API Manager would just create another instance of it on top of the same API implementation. Using a clone of the same API as a fallback does NO GOOD; a fallback API should ideally be a different API implementation from the primary one.
>> There is currently NO option in the Anypoint HTTP Connector that invokes a fallback API when certain HTTP status codes are received in response.
The only TRUE statement among the given options is to search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API.
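The chosen approach can be sketched in a few lines of Python (hypothetical URLs; this shows the general try-primary-then-fallback pattern, not generated Mule code):

```python
# A minimal sketch of invoking a fallback API when the primary Order API is
# unavailable. URLs are hypothetical placeholders.
import requests

PRIMARY_URL = "https://api.example.com/orders"            # primary Order API
FALLBACK_URL = "https://api.example.com/orders-fallback"  # fallback from Exchange

def get_orders() -> dict:
    try:
        response = requests.get(PRIMARY_URL, timeout=5)
        response.raise_for_status()  # treat HTTP 4xx/5xx as a failure
        return response.json()
    except requests.RequestException:
        # Primary Order API is down or erroring: invoke the fallback API.
        response = requests.get(FALLBACK_URL, timeout=5)
        response.raise_for_status()
        return response.json()
```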

An established communications company is beginning its API-led connectivity journey. The company has been using a successful Enterprise Data Model for many years. The company has identified a self-service account management app as the first effort for API-led connectivity, and it has identified the following APIs.

  • Experience layer: Mobile Account Management EAPI, Browser Account Management EAPI
  • Process layer: Customer Lookup PAPI, Service Lookup PAPI, Account Lookup PAPI
  • System layer: Customer SAPI, Account SAPI, Product SAPI, Service SAPI
According to MuleSoft's API-led connectivity approach, which API would not be served by the Enterprise Data Model?


A. Customer SAPI


B. Customer Lookup PAPI


C. Mobile Account Management EAPI


D. Service SAPI





C.
  Mobile Account Management EAPI

Explanation: In MuleSoft's API-led connectivity approach, APIs are organized into Experience, Process, and System layers. When an organization has an established Enterprise Data Model, the System and Process layers are the ones served by it: System APIs translate between the Enterprise Data Model and the native data models of the backend systems, and Process APIs orchestrate using Enterprise Data Model types.
Experience APIs, by contrast, are designed around the needs of a specific end-user channel. The Mobile Account Management EAPI must expose a data model tailored to the mobile application's user experience (field selection, payload size, naming suited to the app), so it would not be served by the Enterprise Data Model. The Customer SAPI, Customer Lookup PAPI, and Service SAPI (options A, B, and D) all sit in layers that are naturally served by the Enterprise Data Model.
For additional guidance, review MuleSoft's best practices on API-led connectivity and data modeling.
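A small sketch (with hypothetical field names) of the distinction: the System/Process layers speak the Enterprise Data Model, while the Mobile EAPI exposes a view model trimmed for the app:

```python
# Illustrative only: the Experience API maps the Enterprise Data Model type to
# a channel-specific view model. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class EnterpriseAccount:      # Enterprise Data Model type (assumed fields)
    account_id: str
    legal_name: str
    billing_address: str
    service_tier: str
    credit_status: str

@dataclass
class MobileAccountView:      # Mobile Account Management EAPI model
    display_name: str
    tier: str

def to_mobile_view(account: EnterpriseAccount) -> MobileAccountView:
    """Expose only what the mobile experience needs, with UI-friendly names."""
    return MobileAccountView(display_name=account.legal_name,
                             tier=account.service_tier)
```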

A client has several applications running on the Salesforce Service Cloud. The business requirement for integration is to get daily data changes from the Account and Case Objects. Data needs to be moved to the client's private cloud AWS DynamoDB instance as a single JSON, and the business foresees wanting only five attributes from the Account object, which has 219 attributes (some custom), and eight attributes from the Case Object. What design should be used to support the API/application data model?


A. Create separate entities for Account and Case Objects by mimicking all the attributes in SAPI, which are combined by the PAPI and filtered to provide JSON output containing 13 attributes.


B. Request the client’s AWS project team to replicate all the attributes and create Account and Case JSON tables in DynamoDB. Then create separate entities for Account and Case Objects by mimicking all the attributes in SAPI to transfer JSON data to DynamoDB for the respective Objects.


C. Start implementing an Enterprise Data Model by defining enterprise Account and Case Objects, and implement SAPI and DynamoDB tables based on the Enterprise Data Model.


D. Create separate entities for Account with five attributes and Case with eight attributes in SAPI, which are combined by the PAPI to provide JSON output containing 13 attributes.





D.
  Create separate entities for Account with five attributes and Case with eight attributes in SAPI, which are combined by the PAPI to provide JSON output containing 13 attributes.
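A minimal sketch of this design (attribute names are hypothetical beyond the counts given in the question):

```python
# Illustrative only: each SAPI exposes just the needed attributes, and the
# PAPI combines them into a single JSON document with 5 + 8 = 13 attributes.
import json

def account_sapi(raw_account: dict) -> dict:
    """Account SAPI: expose 5 of the 219 Salesforce Account attributes."""
    wanted = ["Id", "Name", "Industry", "Phone", "BillingCountry"]
    return {field: raw_account.get(field) for field in wanted}

def case_sapi(raw_case: dict) -> dict:
    """Case SAPI: expose 8 Salesforce Case attributes."""
    wanted = ["Id", "CaseNumber", "Status", "Priority",
              "Subject", "Origin", "CreatedDate", "AccountId"]
    return {field: raw_case.get(field) for field in wanted}

def account_case_papi(raw_account: dict, raw_case: dict) -> str:
    """PAPI: combine both entities into the single JSON sent to DynamoDB."""
    combined = {"account": account_sapi(raw_account),
                "case": case_sapi(raw_case)}
    return json.dumps(combined)
```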

A European company has customers all across Europe, and the IT department is migrating from an older platform to MuleSoft. The main requirements are that the new platform should allow redeployments with zero downtime and deployment of applications to multiple runtime versions, provide security and speed, and utilize Anypoint MQ as the message service. Which runtime plane should the company select based on the requirements without additional network configuration?


A. Runtime Fabric on VMs / Bare Metal for the runtime plane


B. Customer-hosted runtime plane


C. MuleSoft-hosted runtime plane (CloudHub)


D. Anypoint Runtime Fabric on Self-Managed Kubernetes for the runtime plane





C.
  MuleSoft-hosted runtime plane (CloudHub)

Explanation:
For a European company with requirements such as zero-downtime redeployment, deployment to multiple runtime versions, secure and fast performance, and the use of Anypoint MQ without additional network configuration, CloudHub is the best choice for the following reasons:

  • Zero-Downtime Redeployment: CloudHub supports zero-downtime deployment, which allows seamless redeployment of applications without impacting availability.
  • Support for Multiple Runtime Versions: CloudHub allows deploying applications across different Mule runtime versions, giving flexibility to test and migrate applications as needed.
  • Integrated Anypoint MQ: Anypoint MQ is fully integrated with CloudHub and provides reliable messaging across applications. Choosing CloudHub removes the need for additional network configuration, as Anypoint MQ can be accessed directly in this hosted environment.
  • Security and Performance: CloudHub offers secure networking, automatic scaling, and optimized performance without requiring a complex setup. This is managed by MuleSoft’s infrastructure, meeting the speed and security requirements with minimal overhead.
The incorrect options (A, B, and D) are all customer-hosted or Runtime Fabric deployments, which require the company to provision and manage its own infrastructure and perform additional network configuration, contradicting the stated requirements.

For more information on CloudHub’s capabilities regarding zero-downtime deployments and integration with Anypoint MQ, refer to the MuleSoft documentation on CloudHub.

A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?


A.

IP whitelist


B.

SLA-based rate limiting


C.

OAuth 2.0 token enforcement


D.

Client ID enforcement





B.
  

SLA-based rate limiting



Explanation:
Correct Answer: SLA-based rate limiting
*****************************************
>> The Client ID enforcement policy addresses a "compliance"-related NFR and does not help in maintaining Quality of Service (QoS). It CANNOT protect, and is NOT meant to protect, backend systems from scalability challenges.
>> IP whitelisting and OAuth 2.0 token enforcement address "security"-related NFRs and likewise do not help in maintaining Quality of Service (QoS). They CANNOT protect, and are NOT meant to protect, backend systems from scalability challenges.
Rate Limiting, Rate Limiting-SLA, Throttling, and Spike Control are the policies that address "Quality of Service (QoS)" NFRs and are meant to help protect backend systems from being overloaded.
https://dzone.com/articles/how-to-secure-apis
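To illustrate the mechanism, here is a simplified fixed-window counter in Python (not MuleSoft's actual policy engine; the tier quotas are hypothetical):

```python
# A minimal sketch of SLA-based rate limiting: each client tier has its own
# request quota per time window, capping the load that reaches the backend.
import time
from typing import Dict, Tuple

SLA_TIERS = {"gold": 100, "silver": 20}    # hypothetical requests per minute

_windows: Dict[str, Tuple[int, int]] = {}  # client_id -> (window, count)

def allow_request(client_id: str, tier: str) -> bool:
    window = int(time.time() // 60)        # fixed one-minute windows
    start, count = _windows.get(client_id, (window, 0))
    if start != window:                    # a new window starts: reset count
        start, count = window, 0
    if count >= SLA_TIERS[tier]:
        return False                       # over quota: reject with HTTP 429
    _windows[client_id] = (start, count + 1)
    return True
```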

When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?


A.

When there is an existing Enterprise Data Model widely used across the organization


B.

When the System API can be assigned to a bounded context with a corresponding data
model


C.

When a pragmatic approach with only limited isolation from the backend system is deemed appropriate


D.

When the corresponding backend system is expected to be replaced in the near future





C.
  

When a pragmatic approach with only limited isolation from the backend system is deemed appropriate



Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend system is deemed appropriate.
*****************************************
General guidance w.r.t choosing Data Models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context Data Model is considered too much effort, then the API data model of System APIs should make use of data types that approximately mirror those of the backend system: the same semantics and naming as the backend system, lightly sanitized, exposing all fields needed for the given System API’s functionality (but not significantly more), and making good use of REST conventions.
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is so because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system data model does occur in the API implementations of the Process APIs that consume these System APIs.
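A small sketch (hypothetical field names) of this pragmatic approach: the System API model mirrors the backend model, lightly sanitized, instead of translating to an enterprise or bounded-context model:

```python
# Illustrative only: the System API data model approximately mirrors the
# backend model -- same semantics, cleaner names, internal fields hidden.
from dataclasses import dataclass

@dataclass
class BackendCustomerRecord:   # native backend record (assumed fields)
    CUST_NO: str
    CUST_NM: str
    INTERNAL_FLAG: str         # backend-internal field, not for API clients

@dataclass
class SystemApiCustomer:       # System API model: lightly sanitized mirror
    customerNumber: str
    customerName: str          # INTERNAL_FLAG is deliberately not exposed

def to_api_model(rec: BackendCustomerRecord) -> SystemApiCustomer:
    """Rename fields and hide internals; no deeper translation is attempted."""
    return SystemApiCustomer(customerNumber=rec.CUST_NO,
                             customerName=rec.CUST_NM)
```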

