Mulesoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 3-Nov-2025



Mulesoft MCPA-Level-1 exam questions feature realistic, exam-like questions that cover all key topics with detailed explanations. You’ll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you’ll gain the knowledge, speed, and confidence needed to pass the Mulesoft exam on your first attempt.

Why leave your success to chance? Our Mulesoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

An API with multiple API implementations (Mule applications) is deployed to both CloudHub and customer-hosted Mule runtimes. All the deployments are managed by the MuleSoft-hosted control plane. An alert needs to be triggered whenever an API implementation stops responding to API requests, even if no API clients have called the API implementation for some time. What is the most effective out-of-the-box solution to create these alerts to monitor the API implementations?


A. Create monitors in Anypoint Functional Monitoring for the API implementations, where each monitor repeatedly invokes an API implementation endpoint


B. Add code to each API client to send an Anypoint Platform REST API request to generate a custom alert in Anypoint Platform when an API invocation times out


C. Handle API invocation exceptions within the calling API client and raise an alert from that API client when such an exception is thrown


D. Configure one Worker Not Responding alert in Anypoint Runtime Manager for all API implementations that will then monitor every API implementation





A.
  Create monitors in Anypoint Functional Monitoring for the API implementations, where each monitor repeatedly invokes an API implementation endpoint

Explanation:
In scenarios where multiple API implementations are deployed across different environments (CloudHub and customer-hosted runtimes), Anypoint Functional Monitoring is the most effective out-of-the-box tool to monitor API availability and trigger alerts when an API implementation becomes unresponsive. Here’s how it works:

  • Using Anypoint Functional Monitoring: monitors invoke an API implementation endpoint on a fixed schedule, generating synthetic traffic that detects an unresponsive implementation even when no real API clients are calling it.
  • Why Option A is Correct: the monitors work uniformly for deployments on both CloudHub and customer-hosted Mule runtimes managed by the MuleSoft-hosted control plane, and they require no changes to API clients.
  • Explanation of Incorrect Options: Options B and C depend on API clients actually invoking the API (and on custom code in every client), so no alert fires during quiet periods; Option D's Worker Not Responding alert applies only to CloudHub workers and cannot cover customer-hosted Mule runtimes.
References:
For further information, refer to the MuleSoft documentation on Anypoint Functional Monitoring setup and usage for API availability monitoring.
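
Conceptually, a functional monitor is just a scheduled synthetic probe. The following minimal Python sketch illustrates the idea only; the endpoint URL, interval, and alert hook are hypothetical placeholders, and in practice the monitor is configured in Anypoint Functional Monitoring rather than hand-coded:

    import time
    import urllib.error
    import urllib.request

    # Hypothetical values; a real monitor is configured in Anypoint
    # Functional Monitoring, not hand-coded like this.
    ENDPOINT = "https://orders-api.example.com/api/status"
    INTERVAL_SECONDS = 60
    TIMEOUT_SECONDS = 10

    def endpoint_is_responding(url: str) -> bool:
        """Return True if the endpoint answers with an HTTP 2xx in time."""
        try:
            with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
                return 200 <= resp.status < 300
        except OSError:  # covers URLError, timeouts, connection failures
            return False

    def raise_alert(url: str) -> None:
        # Placeholder for a real alert channel (email, webhook, pager).
        print(f"ALERT: {url} is not responding")

    while True:  # probe on a fixed schedule, independent of real client traffic
        if not endpoint_is_responding(ENDPOINT):
            raise_alert(ENDPOINT)
        time.sleep(INTERVAL_SECONDS)

The key property mirrored here is that the probe runs even when no API clients are active, which is exactly why Option A detects silent outages and the client-side options do not.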

What Mule application can have API policies applied by Anypoint Platform to the endpoint exposed by that Mule application?
A) A Mule application that accepts requests over HTTP/1.x



A.

Option A


B.

Option B


C.

Option C


D.

Option D





A.
  Option A



Explanation:
Correct Answer: Option A
*****************************************
>> Anypoint API Manager and API policies are applicable to all types of HTTP/1.x APIs.
>> They are not applicable to WebSocket APIs, HTTP/2 APIs, and gRPC APIs.
Reference: https://docs.mulesoft.com/api-manager/2.x/using-policies

A Mule application exposes an HTTPS endpoint and is deployed to the CloudHub Shared Worker Cloud. All traffic to that Mule application must stay inside the AWS VPC. To what TCP port do API invocations to that Mule application need to be sent?


A. 443


B. 8081


C. 8091


D. 8082





D.
  8082

Explanation:
Correct Answer: 8082

  • Ports 8091 and 8092 are used to keep your HTTP and HTTPS apps, respectively, private to a dedicated (local) VPC.
  • Those two ports do not apply to the shared AWS VPC / Shared Worker Cloud.
  • Port 8081 is used for HTTP endpoints exposed to the internet through the shared load balancer.
  • Port 8082 is used for HTTPS endpoints exposed to the internet through the shared load balancer.
So API invocations to this HTTPS-based app should be sent to port 8082.
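
To make the convention concrete, here is a small hedged sketch in Python (the application name, region, and the worker-direct DNS pattern are illustrative assumptions; verify them against your own CloudHub environment):

    import requests  # third-party HTTP client: pip install requests

    APP, REGION = "orders-api", "us-e1"   # hypothetical app name and region

    # Via the CloudHub shared load balancer (public internet); the LB
    # forwards HTTP to port 8081 and HTTPS to port 8082 on the worker.
    lb_url = f"https://{APP}.{REGION}.cloudhub.io/api/orders"

    # Staying inside the AWS VPC: call the worker directly on the port the
    # app listens on -- 8082 for HTTPS (8081 for HTTP).
    vpc_url = f"https://mule-worker-{APP}.{REGION}.cloudhub.io:8082/api/orders"

    print(requests.get(vpc_url, timeout=10).status_code)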

An organization requires several APIs to be secured with OAuth 2.0, and PingFederate has been identified as the identity provider for API client authorization. The PingFederate Client Provider is configured in Access Management, and the PingFederate OAuth 2.0 Token Enforcement policy is configured for the API instances required by the organization. The API instances reside in two business groups (Group A and Group B) within the Master Organization (Master Org). What should be done to allow API consumers to access the API instances?


A. The API administrator should configure the correct client discovery URL in both child business groups, and the API consumer should request access to the API in Ping Identity


B. The API administrator should grant access to the API consumers by creating contracts in the relevant API instances in API Manager


C. The API consumer should create a client application and request access to the API in Anypoint Exchange, and the API administrator should approve the request


D. The API consumer should create a client application and request access to the API in Ping Identity, and the organization's Ping Identity workflow will grant access





C.
  The API consumer should create a client application and request access to the API in Anypoint Exchange, and the API administrator should approve the request

What API policy would LEAST likely be applied to a Process API?


A.

Custom circuit breaker


B.

Client ID enforcement


C.

Rate limiting


D.

JSON threat protection





D.
  JSON threat protection



Explanation:
Correct Answer: JSON threat protection
*****************************************
Fact: Technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered properly before blindly applying policies to APIs.
That is why this question asks for the policy that would LEAST likely be applied to a Process API.
From the given options:
>> All policies except "JSON threat protection" can be applied without hesitation to APIs in the Process tier.
>> The JSON threat protection policy ideally fits Experience APIs, to prevent suspicious JSON payloads coming from external API clients. It covers a security aspect, blocking possibly malicious and harmful JSON payloads from external clients calling Experience APIs.
Because external API clients are NEVER allowed to call Process APIs directly, and such malicious payloads are always stopped at the Experience API layer by this policy, it is LEAST likely that the same policy would be applied again to a Process-layer API.
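
To make "JSON threat" concrete, the following Python sketch shows a simple nesting-depth check in the same spirit as (though far simpler than) what a JSON threat protection policy enforces. The depth limit is an arbitrary example; real policies also cap string lengths, array sizes, and object entry counts:

    import json

    MAX_DEPTH = 20  # arbitrary example limit, not a real policy default

    def depth(node, level: int = 1) -> int:
        """Nesting depth of a parsed JSON value."""
        if isinstance(node, dict):
            return max((depth(v, level + 1) for v in node.values()), default=level)
        if isinstance(node, list):
            return max((depth(v, level + 1) for v in node), default=level)
        return level

    def check(payload: str) -> None:
        if depth(json.loads(payload)) > MAX_DEPTH:
            raise ValueError("rejected: JSON nesting exceeds the configured limit")

    # A deliberately deep payload, like one a malicious external client
    # might send to exhaust a parser:
    hostile = '{"a":' * 50 + '1' + '}' * 50
    check(hostile)  # raises ValueError

Since only Experience APIs face such untrusted external payloads, this kind of check belongs at that layer, which is the point of the answer above.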

A Rate Limiting policy is applied to an API implementation to protect the back-end system. Recently, there have been surges in demand that cause some API client POST requests to the API implementation to be rejected with policy-related errors, causing delays and complications to the API clients. How should the API policies that are applied to the API implementation be changed to reduce the frequency of errors returned to API clients, while still protecting the back-end system?


A. Keep the Rate Limiting policy and add a Client ID Enforcement policy


B. Remove the Rate Limiting policy and add an HTTP Caching policy


C. Remove the Rate Limiting policy and add a Spike Control policy


D. Keep the Rate Limiting policy and add an SLA-based Spike Control policy





D.
  Keep the Rate Limiting policy and add an SLA-based Spike Control policy

Explanation:
When managing high traffic to an API, especially with POST requests, it is crucial that the API's policies both protect the back-end systems and provide a smooth client experience. Here’s the approach to reducing errors:
Rate Limiting Policy: this policy enforces a limit on the number of requests within a defined time period. Once the limit is hit, further requests are rejected outright, so rate limiting alone causes clients to see errors during demand surges.

  • Adding an SLA-based Spike Control policy: Spike Control smooths bursts by queuing and delaying excess requests (with configurable delay and retry attempts) instead of rejecting them immediately, and the SLA-based variant applies those limits per client tier.
  • Why Option D is Correct: keeping the Rate Limiting policy continues to protect the back-end against sustained overload, while the Spike Control policy absorbs short surges, reducing the policy-related errors returned to API clients.
  • Explanation of Incorrect Options: Client ID Enforcement (A) only identifies clients and does nothing for surges; HTTP Caching (B) does not help POST requests, which are generally not cacheable; removing the Rate Limiting policy (C) would leave the back-end without its primary protection.
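
The behavioral difference between rejecting and smoothing can be sketched in a few lines of Python (illustrative toy logic only; the real policies run inside the Mule gateway with their own windowing and queuing semantics, and all numbers here are invented):

    import time
    from collections import deque

    LIMIT_PER_WINDOW = 5      # toy numbers, not real policy defaults
    WINDOW_SECONDS = 1.0

    def rate_limited_call(stamps: deque, handler):
        """Rate limiting: reject outright once the window's quota is used."""
        now = time.monotonic()
        while stamps and now - stamps[0] > WINDOW_SECONDS:
            stamps.popleft()                 # drop requests outside the window
        if len(stamps) >= LIMIT_PER_WINDOW:
            return 429                       # client sees a policy error
        stamps.append(now)
        return handler()

    def spike_controlled_call(stamps: deque, handler, retries=5, delay=0.25):
        """Spike control: over quota, delay and retry instead of failing fast."""
        for _ in range(retries + 1):
            status = rate_limited_call(stamps, handler)
            if status != 429:
                return status
            time.sleep(delay)                # smooth the burst by queuing
        return 429                           # fail only if the surge persists

With these toy numbers, a burst that briefly exceeds the quota is delayed until the window frees up and then succeeds under spike control, whereas plain rate limiting would return an error immediately.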

A large company wants to implement IT infrastructure in its own data center, based on the corporate IT policy requirements that data and metadata reside locally. Which combination of Mule control plane and Mule runtime plane(s) meets the requirements?


A. Anypoint Platform Private Cloud Edition for the control plane and the MuleSoft-hosted runtime plane


B. The MuleSoft-hosted control plane and Anypoint Runtime Fabric for the runtime plane


C. The MuleSoft-hosted control plane and customer-hosted Mule runtimes for the runtime plane


D. Anypoint Platform Private Cloud Edition for the control plane and customer-hosted Mule runtimes for the runtime plane





D.
  Anypoint Platform Private Cloud Edition for the control plane and customer-hosted Mule runtimes for the runtime plane

Explanation:

  • Understanding control and runtime planes: the control plane holds design, management, and monitoring functions together with their metadata, while the runtime plane consists of the Mule runtimes that process the actual data.
  • Evaluating the options: any option with the MuleSoft-hosted control plane (B and C) stores metadata outside the data center, and a MuleSoft-hosted runtime plane (A) processes data outside it; only option D keeps both planes on-premises.
Conclusion:
Anypoint Platform Private Cloud Edition runs the control plane inside the company's own data center, and customer-hosted Mule runtimes keep the data local, so both data and metadata reside locally as the corporate IT policy requires.
Refer to MuleSoft's documentation on Private Cloud Edition deployment and on-premises runtime configurations for further details.

A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?


A.

Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore


B.

Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%


C.

Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers


D.

Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%





D.
  Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%



Explanation:
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic during the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.
Based on that, we neither need to permanently increase the size of each worker nor permanently increase the number of workers. Doing so would be wasteful, because outside those occasional spikes the extra resources would sit idle.
Two options remain: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker.
Two things need to be considered here:
1. CPU
2. Order submission rate to the JMS queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; both bring usage back below 90%.
>> With vertical scaling, however, the application is still load-balanced across only two workers, so the incoming request processing rate, and with it the order submission rate to the JMS queue, may not improve much. Throughput stays roughly the same; only CPU utilization comes down.
>> With horizontal scaling, new workers are spawned, and the additional workers in the load-balancing pool increase throughput. This addresses both the CPU utilization and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the right and best answer.
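
As a rough worked example of that throughput argument in Python (all numbers are invented for illustration, and the sketch follows the explanation's simplifying assumption that throughput tracks worker count rather than vCore size):

    # Toy model of the scaling argument above. PER_WORKER_RATE is an assumed
    # sustained intake rate for one 0.2-vCore worker; real CloudHub behavior
    # is not this linear.
    PER_WORKER_RATE = 10                      # assumed orders/sec per worker
    NORMAL_LOAD = 2 * PER_WORKER_RATE * 0.7   # two workers at ~70% busy

    surge = 4 * NORMAL_LOAD                   # "4x the average number of orders"

    # Vertical scaling: still two workers behind the load balancer, so the
    # intake ceiling stays near 2 * PER_WORKER_RATE regardless of vCore size.
    vertical_capacity = 2 * PER_WORKER_RATE

    # Horizontal scaling: each added worker raises the intake ceiling.
    workers_needed = -(-surge // PER_WORKER_RATE)   # ceiling division

    print(f"surge load: {surge:.0f} orders/sec")
    print(f"vertical scaling capacity stays near {vertical_capacity} orders/sec")
    print(f"horizontal scaling sustains it with ~{workers_needed:.0f} workers")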

