Mulesoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 21-Jan-2026



Mulesoft MCPA-Level-1 exam questions feature realistic, exam-like questions that cover all key topics with detailed explanations. You’ll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you’ll gain the knowledge, speed, and confidence needed to pass the Mulesoft exam on your first attempt.

Why leave your success to chance? Our Mulesoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

Which component monitors APIs and endpoints at scheduled intervals, receives reports about whether tests pass or fail, and displays statistics about API and endpoint performance?


A. API Analytics


B. Anypoint Monitoring dashboards


C. API Functional Monitoring


D. Anypoint Runtime Manager alerts





C.
  API Functional Monitoring

Explanation:
API Functional Monitoring runs scheduled functional tests against APIs and their endpoints, reports whether each test passes or fails, and displays statistics about API and endpoint performance, which is exactly what the question describes.

  • API Analytics reports on API usage and policy metrics; it does not execute scheduled tests.
  • Anypoint Monitoring dashboards visualize operational metrics (CPU, memory, response times) for deployed applications; they do not run tests against endpoints.
  • Anypoint Runtime Manager alerts notify on runtime events such as resource thresholds or deployment failures; they are not a testing component.
Conclusion: API Functional Monitoring is the only component that matches all three capabilities in the question.
Refer to the MuleSoft documentation on API Functional Monitoring for further guidance on setting up and configuring these tests in Anypoint Platform.
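As a rough illustration of what such a scheduled check does, the Python sketch below calls an endpoint, asserts on the response, and records pass/fail plus timing. It is a conceptual stand-in only, not the test syntax Anypoint Functional Monitoring actually uses, and the URL is hypothetical.

import requests

# Conceptual sketch of a scheduled functional check: call an endpoint, assert on
# the response, and record pass/fail plus response time. The URL is hypothetical.
ENDPOINT = "https://my-api.example.cloudhub.io/api/health"

def run_check() -> dict:
    result = {"endpoint": ENDPOINT, "passed": False, "response_ms": None}
    try:
        resp = requests.get(ENDPOINT, timeout=10)
        result["response_ms"] = int(resp.elapsed.total_seconds() * 1000)
        result["passed"] = resp.status_code == 200
    except requests.RequestException:
        pass  # leave passed=False; a real monitor would also report the failure reason
    return result

print(run_check())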

What kind of Mule application can have API policies applied by Anypoint Platform to the
endpoint exposed by that Mule application?

A. A Mule application that accepts requests over HTTP/1.x


B. Option B


C. Option C


D. Option D





A.
  

Option A



Explanation:
Correct Answer: Option A
*****************************************
>> Anypoint API Manager and API policies are applicable to all types of HTTP/1.x APIs.
>> They are not applicable to WebSocket APIs, HTTP/2 APIs and gRPC APIs
Reference: https://docs.mulesoft.com/api-manager/2.x/using-policies

A large lending company has developed an API to unlock data from a database server and web server. The API has been deployed to Anypoint Virtual Private Cloud (VPC) on CloudHub 1.0. The database server and web server are in the customer's secure network and are not accessible through the public internet. The database server is in the customer's AWS VPC, whereas the web server is in the customer's on-premises corporate data center. How can access be enabled for the API to connect with the database server and the web server?


A. Set up VPC peering with AWS VPC and a VPN tunnel to the customer's on-premises corporate data center


B. Set up VPC peering with AWS VPC and the customer's on-premises corporate data center


C. Set up a transit gateway to the customer's on-premises corporate data center through AWS VPC


D. Set up VPC peering with the customer's on-premises corporate data center and a VPN tunnel to AWS VPC





A.
  Set up VPC peering with AWS VPC and a VPN tunnel to the customer's on-premises corporate data center

Explanation:
The API runs in an Anypoint VPC on CloudHub 1.0 and must reach two private backends: a database server inside the customer's AWS VPC and a web server inside the customer's on-premises corporate data center.

  • VPC peering connects the Anypoint VPC directly to another cloud VPC, so it is the appropriate mechanism for reaching the database server in the customer's AWS VPC.
  • VPC peering cannot be established with an on-premises data center; connectivity to on-premises networks is achieved with an IPsec VPN tunnel, which is how the web server is reached.
Conclusion: Option A, VPC peering with the AWS VPC plus a VPN tunnel to the on-premises corporate data center, enables access to both backends, while the other options apply the wrong mechanism to at least one of them.
For more detail, the MuleSoft documentation on Anypoint VPC peering and VPN connectivity covers best practices for setting up these connections within a hybrid network infrastructure.

A Mule 4 API has been deployed to CloudHub and a Basic Authentication - Simple policy has been applied to all API methods and resources. However, the API is still accessible by clients without using authentication. How is this possible?


A. The API Router component is pointing to the incorrect Exchange version of the API


B. The Autodiscovery element is not present in the deployed Mule application


C. No … for client applications have been created for this API


D. One of the application’s CloudHub workers restarted





B.
  The Autodiscovery element is not present in the deployed Mule application

Explanation:
When a Basic Authentication policy is applied to an API on CloudHub but clients can still access the API without authentication, the likely cause is a missing Autodiscovery element. Here is how this affects API security:

  • Autodiscovery in MuleSoft: the Autodiscovery element in the deployed Mule application links the application to a specific API instance in API Manager. Only when this link exists does the Mule runtime fetch and enforce the policies configured for that API instance.
  • Why Option B is correct: without the Autodiscovery element, the application runs unpaired with API Manager, so the Basic Authentication policy is never enforced at runtime and unauthenticated requests succeed.
  • Why the other options are incorrect: pointing at the wrong Exchange version, missing client contracts, or a CloudHub worker restart would not cause an applied policy to be skipped for every request.
References:
Refer to the MuleSoft documentation on Autodiscovery configuration and on applying API Manager policies for additional information on setting up secure API policies.
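A quick client-side probe can reveal whether the policy is actually being enforced. The hedged Python sketch below assumes a hypothetical API URL and hypothetical credentials; it is a diagnostic idea, not part of the exam material.

import requests

API_URL = "https://orders-api.example.cloudhub.io/api/orders"  # hypothetical endpoint

# With Basic Authentication enforced, an unauthenticated request should be rejected (401).
anon = requests.get(API_URL, timeout=10)
print("no credentials ->", anon.status_code)   # a 200 here suggests the policy is not being
                                               # enforced, e.g. a missing Autodiscovery element

# An authenticated request supplies credentials via HTTP Basic auth.
authed = requests.get(API_URL, auth=("demo_user", "demo_password"), timeout=10)
print("with credentials ->", authed.status_code)  # expected 200 when the policy is applied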

A company has created a successful enterprise data model (EDM). The company is
committed to building an application network by adopting modern APIs as a core enabler of
the company's IT operating model. At what API tiers (experience, process, system) should
the company require reusing the EDM when designing modern API data models?


A.

At the experience and process tiers


B.

At the experience and system tiers


C.

At the process and system tiers


D.

At the experience, process, and system tiers





C.
  

At the process and system tiers



Explanation:
Correct Answer: At the process and system tiers
*****************************************
>> Experience Layer APIs are modeled and designed exclusively around the end user's
experience, so their data models vary with the nature and type of the API consumer. For
example, mobile consumers need lightweight data models that travel efficiently over the
wire, whereas web-based consumers need more detailed data models to render most of
the information on web pages. Enterprise data models work well as canonical models, but
they are not a good fit for Experience APIs.
>> That is why the EDM should be reused extensively at the process and system tiers but
NOT at the experience tier.
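To make the distinction concrete, here is a small, hypothetical Python sketch: a slice of an enterprise data model entity that Process and System APIs would reuse as-is, next to a deliberately lightweight model that an Experience API for a mobile channel might expose instead. All field names are illustrative assumptions, not part of any real EDM.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerEDM:
    """Slice of a hypothetical enterprise data model entity, reused by Process and System APIs."""
    customer_id: str
    legal_name: str
    tax_id: str
    billing_address: str
    shipping_address: str
    credit_limit: float
    risk_rating: Optional[str] = None

@dataclass
class CustomerMobileView:
    """Lightweight model an Experience API for a mobile client might expose instead of the EDM."""
    customer_id: str
    display_name: str

def to_mobile_view(c: CustomerEDM) -> CustomerMobileView:
    # The Experience API maps from the canonical model down to what the channel needs.
    return CustomerMobileView(customer_id=c.customer_id, display_name=c.legal_name)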

A retail company is using an Order API to accept new orders. The Order API uses a JMS
queue to submit orders to a backend order management service. The normal load for
orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore.
The CPU load of each CloudHub worker normally runs well below 70%. However, several
times during the year the Order API gets four times (4x) the average number of orders.
This causes the CloudHub worker CPU load to exceed 90% and the order submission time
to exceed 30 seconds. The cause, however, is NOT the backend order management
service, which still responds fast enough to meet the response SLA for the Order API.
What is the MOST resource-efficient way to configure the Mule application's CloudHub
deployment to help the company cope with this performance challenge?


A.

Permanently increase the size of each of the two (2) CloudHub workers by at least four
times (4x) to one (1) vCore


B.

Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than
70%


C.

Permanently increase the number of CloudHub workers by four times (4x) to eight (8)
CloudHub workers


D.

Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%





D.
  

Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%



Explanation:
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU
utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic throughout the year is handled well by the
existing worker configuration, with CPU running well below 70%. The problem occurs only
occasionally, when there is a spike in the number of incoming orders.
So we neither need to permanently increase the size of each worker nor permanently
increase the number of workers. Outside those occasional spikes, the extra resources
would sit idle and be wasted.
That leaves two options: a horizontal CloudHub autoscaling policy that automatically
increases the number of workers, or a vertical CloudHub autoscaling policy that
automatically increases the vCore size of each worker.
Two factors need to be taken into consideration:
1. CPU utilization
2. Order submission rate to the JMS queue
>> From the CPU perspective, both options (horizontal and vertical scaling) solve the
issue; both bring the usage back below 90%.
>> However, with vertical scaling the application is still load balanced across only two
workers, so there may not be much improvement in the incoming request processing rate
or in the order submission rate to the JMS queue. Throughput stays roughly the same;
only CPU utilization comes down.
>> With horizontal scaling, new workers are spawned and added to the load balancer,
which increases throughput. This addresses both CPU utilization and the order
submission rate.
Hence, a horizontal CloudHub autoscaling policy is the right and most resource-efficient answer.
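The arithmetic behind scaling out can be sketched with a toy scale-out rule. This is only an illustration of the idea; it is not CloudHub's actual autoscaling algorithm, and the thresholds are assumptions.

import math

def workers_needed(current_workers: int, avg_cpu_percent: float,
                   scale_up_threshold: float = 70.0, target_cpu: float = 60.0) -> int:
    """Toy horizontal-scaling rule: add workers when average CPU crosses the threshold."""
    if avg_cpu_percent <= scale_up_threshold:
        return current_workers
    # Spread the same total load across enough workers to land below the target CPU.
    total_load = current_workers * avg_cpu_percent
    return math.ceil(total_load / target_cpu)

print(workers_needed(2, 50))   # normal load: stay at 2 workers
print(workers_needed(2, 95))   # seasonal spike past 90% CPU: scale out to 4 workers (toy numbers)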

In which layer of API-led connectivity does the business logic orchestration reside?


A.

System Layer


B.

Experience Layer


C.

Process Layer





C.
  

Process Layer



Explanation:
Correct Answer: Process Layer
*****************************************
>> The Experience layer is dedicated to enriching the end-user experience; it exists to
meet the needs of different API clients and consumers.
>> The System layer is dedicated to modular APIs that implement and expose individual
capabilities of backend systems.
>> The Process layer is where simple or complex business orchestration logic is written,
by invoking one or more modular System layer APIs.
So, the Process Layer is the right answer.
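As a hedged illustration of what "orchestration in the Process layer" means, the Python sketch below shows a Process-level operation composing two System APIs. The endpoint names and fields are made up for the example.

import requests

# Hypothetical System API endpoints; names are illustrative only.
CUSTOMER_SYS_API = "https://internal.example.com/sys-customer/v1/customers"
ORDER_SYS_API = "https://internal.example.com/sys-order/v1/orders"

def get_customer_order_history(customer_id: str) -> dict:
    """Toy Process-layer orchestration: compose two System APIs into one result."""
    customer = requests.get(f"{CUSTOMER_SYS_API}/{customer_id}", timeout=10).json()
    orders = requests.get(ORDER_SYS_API, params={"customerId": customer_id}, timeout=10).json()
    # Business orchestration and aggregation live here, in the Process layer.
    return {"customer": customer, "orders": orders}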

When designing an upstream API and its implementation, the development team has been
advised to NOT set timeouts when invoking a downstream API, because that downstream
API has no SLA that can be relied upon. This is the only downstream API dependency of
that upstream API.
Assume the downstream API runs uninterrupted without crashing. What is the impact of
this advice?


A.

An SLA for the upstream API CANNOT be provided


B.

The invocation of the downstream API will run to completion without timing out


C.

A default timeout of 500 ms will automatically be applied by the Mule runtime in which the upstream API implementation executes


D.

A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in
which the downstream API implementation executes





A.
  

An SLA for the upstream API CANNOT be provided



Explanation:
Correct Answer: An SLA for the upstream API CANNOT be provided.
*****************************************
>> First things first: the default HTTP response timeout of the HTTP connector is 10000 ms (10
seconds), NOT 500 ms.
>> The Mule runtime does NOT apply any "load-dependent" timeouts. No such behavior
exists in Mule.
>> Because the HTTP connector's default 10000 ms timeout still applies, we CANNOT
guarantee that the invocation of the downstream API will always run to completion without
timing out, given its unreliable response times. If the response time exceeds 10 seconds,
the request may time out.
The main impact of this advice is that a proper SLA for the upstream API CANNOT be
provided.
Reference: https://docs.mulesoft.com/http-connector/1.5/http-documentation#parameters-3
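To illustrate why the timeout matters for any upstream SLA, here is a minimal client-side Python sketch of the same idea. The endpoint is hypothetical, and the 10-second default mirrors the HTTP connector's default response timeout mentioned above.

from typing import Optional

import requests

DOWNSTREAM_URL = "https://downstream.example.com/api/quotes"  # hypothetical endpoint

def call_downstream(timeout_seconds: Optional[float] = 10.0) -> dict:
    # With timeout=None this call can block indefinitely, so the upstream response time
    # is unbounded and no upstream SLA can be promised. With a finite timeout the worst
    # case is bounded, but a slow downstream response surfaces as requests.Timeout,
    # which the upstream API must turn into a fallback, a retry, or a fast failure.
    try:
        return requests.get(DOWNSTREAM_URL, timeout=timeout_seconds).json()
    except requests.Timeout:
        return {"error": "downstream timed out", "timeout_seconds": timeout_seconds}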

