MuleSoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 21-Jan-2026



MuleSoft MCPA-Level-1 exam questions feature realistic, exam-like questions that cover all key topics with detailed explanations. You’ll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you’ll gain the knowledge, speed, and confidence needed to pass the MuleSoft exam on your first attempt.

Why leave your success to chance? Our MuleSoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

A team is planning to enhance an Experience API specification, and they are following API-led connectivity design principles. What is their motivation for enhancing the API?


A. The primary API consumer wants certain kinds of endpoints changed from the Center for Enablement standard to the consumer system standard


B. The underlying System API is updated to provide more detailed data for several heavily used resources


C. An IP Allowlist policy is being added to the API instances in the Development and Staging environments


D. A Canonical Data Model is being adopted that impacts several types of data included in the API





D.
  A Canonical Data Model is being adopted that impacts several types of data included in the API

Explanation:
In API-led design, an Experience API is enhanced to improve how data is delivered to end-user applications. One primary reason to enhance an Experience API is the adoption of a new data standard such as a Canonical Data Model (CDM): a standardized, organization-wide representation of common business entities. When a CDM is adopted that impacts several types of data included in the API, the API specification itself must be enhanced so that the data it exposes conforms to the new model.

References:
For more details on the use of Canonical Data Models in API-led architecture, refer to MuleSoft’s guidelines on data standardization and Experience API best practices.

A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is handled by two (2) CloudHub workers, each configured with 0.2 vCore, and the CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders, which causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?


A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore


B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%


C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers


D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%





D.
  Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%



Explanation:
The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.

Based on that, there is no need to permanently increase either the size or the number of workers; outside of those occasional spikes, the extra resources would sit idle and be wasted. Two options remain: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker.

Two factors need to be taken into consideration:
1. CPU utilization
2. Order submission rate to the JMS queue

From the CPU perspective, both options (horizontal and vertical scaling) solve the issue; either brings utilization back below 90%. However, with vertical scaling the application is still load balanced across only two workers, so the incoming request processing rate and the order submission rate to the JMS queue may not improve much: throughput stays roughly the same while only CPU utilization comes down. With horizontal scaling, new workers are spawned and load balanced alongside the existing ones, increasing throughput and addressing both the CPU load and the order submission rate.

Hence, a horizontal CloudHub autoscaling policy is the right and best answer.
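
For context, here is a minimal sketch of the kind of Mule flow the question describes: an HTTP Listener that accepts an order and publishes it to a JMS queue for the backend order management service. All names (the configs, path, and queue) are illustrative assumptions, not details given in the question:

<!-- Hypothetical Order API flow. The listener config, path, and queue
     name below are placeholders for illustration only. -->
<flow name="order-api-main-flow">
  <!-- Accept new orders over HTTP(S) -->
  <http:listener config-ref="Order_HTTPS_Listener_config" path="/api/orders"/>
  <!-- Submit each order to the backend order management service
       via the JMS queue, as described in the scenario -->
  <jms:publish config-ref="Order_JMS_Config" destination="orders.queue"/>
</flow>

With horizontal autoscaling, each additional worker runs a copy of this flow behind the load balancer, which is why the order submission rate to the queue increases along with the available CPU.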

A Mule application implements an API. The Mule application has an HTTP Listener whose connector configuration sets the HTTPS protocol and hard-codes the port value. The Mule application is deployed to an Anypoint VPC and uses the CloudHub 1.0 Shared Load Balancer (SLB) for all incoming traffic. Which port number must be assigned to the HTTP Listener's connector configuration so that the Mule application properly receives HTTPS API invocations routed through the SLB?


A. 8082


B. 8092


C. 80


D. 443





A.
  8082

Explanation:
The CloudHub 1.0 Shared Load Balancer listens on ports 80 (HTTP) and 443 (HTTPS) and forwards that traffic to fixed ports on the application's CloudHub workers: 8081 for HTTP and 8082 for HTTPS. Since this application's HTTP Listener is configured with the HTTPS protocol and receives all incoming traffic through the SLB, its connector configuration must bind to port 8082.

Ports 8091 and 8092 are the default upstream ports used by a Dedicated Load Balancer (DLB) within an Anypoint VPC, not by the SLB. Ports 80 and 443 are the ports on which the load balancer itself listens, not ports assigned to a worker's HTTP Listener.

References
For more information on the Shared Load Balancer port configurations, refer to MuleSoft’s documentation on CloudHub and VPC load balancer requirements.
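
As an illustration, here is a minimal sketch of an HTTP Listener connector configuration bound to the SLB's HTTPS worker port. The config name and keystore details are assumptions, not values from the question:

<!-- Hypothetical HTTPS listener config for a CloudHub 1.0 application
     behind the Shared Load Balancer, which forwards HTTPS traffic to
     worker port 8082. Keystore values are placeholders. -->
<http:listener-config name="https_listener_config">
  <http:listener-connection host="0.0.0.0" port="8082" protocol="HTTPS">
    <tls:context>
      <tls:key-store type="jks" path="keystore.jks"
                     keyPassword="${keystore.key.password}"
                     password="${keystore.password}"/>
    </tls:context>
  </http:listener-connection>
</http:listener-config>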

A Rate Limiting policy is applied to an API implementation to protect the back-end system. Recently, there have been surges in demand that cause some API client POST requests to the API implementation to be rejected with policy-related errors, causing delays and complications to the API clients. How should the API policies that are applied to the API implementation be changed to reduce the frequency of errors returned to API clients, while still protecting the back-end system?


A. Keep the Rate Limiting policy and add a Client ID Enforcement policy


B. Remove the Rate Limiting policy and add an HTTP Caching policy


C. Remove the Rate Limiting policy and add a Spike Control policy


D. Keep the Rate Limiting policy and add an SLA-based Spike Control policy





C.
  Remove the Rate Limiting policy and add a Spike Control policy

Explanation:
When managing high traffic to an API, especially with POST requests, the applied policies must both protect the back-end system and provide a smooth client experience.

A Rate Limiting policy enforces a hard cap on the number of requests within a defined time window and rejects anything over the limit, which is exactly what causes the policy-related errors during demand surges. A Spike Control policy, by contrast, smooths traffic: requests that exceed the configured rate can be queued and retried after a delay rather than immediately rejected, so the back-end system is still protected while far fewer errors are returned to clients.

Replacing the Rate Limiting policy with a Spike Control policy (option C) therefore reduces client-facing errors while preserving back-end protection. Option A adds client identification but does not change the rejection behavior; option B's HTTP Caching policy does not help with POST requests, which are typically not cacheable; and option D refers to an "SLA-based Spike Control" policy, which is not an available Anypoint policy (the SLA-based variant exists for Rate Limiting, not Spike Control).

A large company wants to implement IT infrastructure in its own data center, based on the corporate IT policy requirements that data and metadata reside locally. Which combination of Mule control plane and Mule runtime plane(s) meets the requirements?


A. Anypoint Platform Private Cloud Edition for the control plane and the MuleSoft-hosted runtime plane


B. The MuleSoft-hosted control plane and Anypoint Runtime Fabric for the runtime plane


C. The MuleSoft-hosted control plane and customer-hosted Mule runtimes for the runtime plane


D. Anypoint Platform Private Cloud Edition for the control plane and customer-hosted Mule runtimes for the runtime plane





D.
  Anypoint Platform Private Cloud Edition for the control plane and customer-hosted Mule runtimes for the runtime plane

Explanation:
Anypoint Platform separates the control plane (the management layer: Runtime Manager, API Manager, Exchange, and related services, where metadata resides) from the runtime plane (the Mule runtimes where applications run and data is processed). Because the corporate IT policy requires both data and metadata to reside locally, both planes must be hosted in the company's own data center. Anypoint Platform Private Cloud Edition places the control plane on customer infrastructure, and customer-hosted Mule runtimes keep the runtime plane on premises, so option D meets the requirements. Options A, B, and C each leave at least one plane hosted by MuleSoft, which would move data or metadata off premises.

Refer to MuleSoft's documentation on Private Cloud Edition deployment and on-premises runtime configurations for further details.

A set of tests must be performed prior to deploying API implementations to a staging environment. Due to data security and access restrictions, untested APIs cannot be granted access to the backend systems, so mocked data must be used for these tests instead. The amount of available mocked data and its contents is sufficient to entirely test the API implementations with no active connections to the backend systems. What type of tests should be used to incorporate this mocked data?


A. Integration tests


B. Performance tests


C. Functional tests (Blackbox)


D. Unit tests (Whitebox)





D.
  Unit tests (Whitebox)

Explanation:
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies
As per general IT testing practice and MuleSoft-recommended practice, integration and performance tests should be run against a full end-to-end setup for a valid evaluation, meaning all end systems should be connected during those tests. Those options are therefore out, leaving Unit Tests and Functional Tests.

Per the referenced MuleSoft documentation:
Unit Tests are limited to code that can realistically be exercised without running it inside Mule itself. Good candidates are small pieces of modular code, subflows, custom transformers, custom components, custom expression evaluators, and so on.
Functional Tests are those that most extensively exercise your application configuration. In these tests, you have the freedom and tools to simulate happy and unhappy paths, including the ability to create stubs for target services and make them succeed or fail.

Since the scenario requires the API implementations to be tested before deployment to Staging, and clearly indicates there is sufficient mocked data to test the various components of the API implementations with no active connections to the backend systems, Unit Tests are the ones to use to incorporate this mocked data.
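
To make this concrete, here is a minimal MUnit sketch of a unit test that mocks the backend call and supplies mocked data. The flow name, mocked processor, file name, and asserted value are assumptions for illustration:

<!-- Hypothetical MUnit unit test: the backend HTTP request is mocked so
     the flow runs against mocked data with no live backend connection. -->
<munit:test name="process-order-unit-test"
            description="Exercise the flow using mocked backend data">
  <munit:behavior>
    <!-- Replace the real backend request with a canned JSON response -->
    <munit-tools:mock-when processor="http:request">
      <munit-tools:then-return>
        <munit-tools:payload
            value="#[readUrl('classpath://mock-order-response.json', 'application/json')]"/>
      </munit-tools:then-return>
    </munit-tools:mock-when>
  </munit:behavior>
  <munit:execution>
    <flow-ref name="process-order-flow"/>
  </munit:execution>
  <munit:validation>
    <munit-tools:assert-that expression="#[payload.status]"
                             is="#[MunitTools::equalTo('ACCEPTED')]"/>
  </munit:validation>
</munit:test>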

Which Anypoint connectors support transactions?


A. Database, JMS, VM


B. Database, JMS, HTTP


C. Database, JMS, VM, SFTP


D. Database, VM, File





A.
  Database, JMS, VM
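
For illustration, here is a minimal sketch of how these connectors participate in a local transaction in a Mule flow: the JMS listener begins the transaction and the database operation joins it, so the message is consumed only if the insert commits. The config names, queue, and SQL are assumptions:

<!-- Hypothetical transactional flow; names and SQL are placeholders. -->
<flow name="transactional-order-flow">
  <!-- Begin a transaction when a message is received -->
  <jms:listener config-ref="JMS_Config" destination="orders.queue"
                transactionalAction="ALWAYS_BEGIN"/>
  <!-- Join the same transaction for the database write -->
  <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
    <db:sql>INSERT INTO orders (payload) VALUES (:payload)</db:sql>
    <db:input-parameters>#[{payload: write(payload, 'application/json')}]</db:input-parameters>
  </db:insert>
</flow>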



An API experiences a high rate of client requests (TPS) with small message payloads. How can usage limits be imposed on the API based on the type of client application?


A. Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type


B. Use a spike control policy that limits the number of requests for each client application type


C. Use a cross-origin resource sharing (CORS) policy to limit resource sharing between client applications, configured by the client application type


D. Use a rate limiting policy and a client ID enforcement policy, each configured by the client application type





A.
  Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type



Explanation:
SLA tiers come into play whenever limits must be imposed on an API based on the type of client application: each client application is approved for a specific SLA tier, and the SLA-based rate limiting policy enforces the limits defined for that tier.
Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies
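
To illustrate how an SLA-based policy identifies the calling application, here is a minimal sketch of a client request passing its client credentials, which the policy uses to resolve the application's SLA tier. The URL, property names, and header names are assumptions (the expected headers depend on how the policy is configured):

<!-- Hypothetical client-side request supplying client_id/client_secret
     so the SLA-based rate limiting policy can match the caller to its
     approved SLA tier. URL and properties are placeholders. -->
<http:request method="POST" url="https://api.example.com/orders">
  <http:headers>#[{
    client_id: p('client.id'),
    client_secret: p('client.secret')
  }]</http:headers>
</http:request>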

