A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space condition. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details to note in the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails and the logs show a message indicating an out-of-file-space
message
Based on the above details:
>> Neither autoscaling option helps, because autoscaling policies cannot be triggered by
error messages. Autoscaling rules are kicked off by CPU/memory usage, not by a particular
error or a disk-space condition.
>> Increasing the number of CloudHub workers also does NOT help, because the failure is
not caused by CPU or memory pressure; it is caused by a lack of disk space.
>> Moreover, the API performs a bulk insert to submit each received batch, which means
all of a batch's data is handled by ONE worker at a time. The disk-space issue therefore
has to be tackled on a per-worker basis; having multiple workers does not help, as a batch
can still fail on whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker,
so that a new worker with more disk space is provisioned.
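For illustration, a CloudHub 1.0 worker's size can be set at deployment time. Below is a
minimal sketch using the Mule Maven plugin's cloudHubDeployment configuration; the
application name, environment, credential placeholders, and version numbers are
assumptions for the example, not values from the scenario:

    <plugin>
      <groupId>org.mule.tools.maven</groupId>
      <artifactId>mule-maven-plugin</artifactId>
      <version>3.8.2</version>
      <extensions>true</extensions>
      <configuration>
        <cloudHubDeployment>
          <uri>https://anypoint.mulesoft.com</uri>
          <muleVersion>4.4.0</muleVersion>
          <username>${anypoint.username}</username>
          <password>${anypoint.password}</password>
          <applicationName>purchases-api</applicationName> <!-- hypothetical name -->
          <environment>Production</environment>
          <workers>1</workers>
          <!-- a larger workerType provisions more vCores, memory, and disk space -->
          <workerType>Large</workerType>
        </cloudHubDeployment>
      </configuration>
    </plugin>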
A business process is being implemented within an organization's application network. The architecture group proposes using a more coarse-grained application network design with relatively fewer APIs deployed to the application network compared to a more fine-grained design. Overall, which factor typically increases with a more coarse-grained design for this business process implementation and deployment compared with using a more fine-grained design?
A. The complexity of each API implementation
B. The number of discoverable assets related to APIs deployed in the application network
C. The number of possible connections between API implementations in the application network
D. The usage of network infrastructure resources by the application network
An Order API triggers a sequence of other API calls to look up details of an order's items in
a back-end inventory database. The Order API calls the OrderItems process API, which
calls the Inventory system API. The Inventory system API performs database operations in
the back-end inventory database.
The network connection between the Inventory system API and the database is known to
be unreliable and hang at unpredictable times.
Where should a two-second timeout be configured in the API processing sequence so that
the Order API never waits more than two seconds for a response from the OrderItems
process API?

A. In the OrderItems process API implementation
B. In the Order API implementation
C. In the Inventory system API implementation
D. In the inventory database
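For reference, whichever layer the timeout belongs in, a response timeout on an outbound
HTTP call in a Mule 4 flow is typically set via the responseTimeout attribute (in
milliseconds) of the HTTP requester. A minimal sketch, where the configuration name and
path are illustrative assumptions:

    <!-- fail the outbound call if no response arrives within 2 seconds -->
    <http:request method="GET"
                  config-ref="OrderItems_HTTP_Config"
                  path="/orderItems"
                  responseTimeout="2000"/>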
Say there is a legacy CRM system called CRM-Z that offers the functions below:
1. Customer creation
2. Amend details of an existing customer
3. Retrieve details of a customer
4. Suspend a customer
What is the best approach to implement system APIs on top of this legacy system?
A.
Implement a system API named customerManagement that has all the functionalities
wrapped in it as various operations/resources
B.
Implement different system APIs named createCustomer, amendCustomer,
retrieveCustomer and suspendCustomer as they are modular and have separation of concerns
C.
Implement different system APIs named createCustomerInCRMZ,
amendCustomerInCRMZ, retrieveCustomerFromCRMZ and suspendCustomerInCRMZ as
they are modular and have separation of concerns
Implement different system APIs named createCustomer, amendCustomer,
retrieveCustomer and suspendCustomer as they are modular and have separation of concerns
Correct Answer: Implement different system APIs named createCustomer,
amendCustomer, retrieveCustomer and suspendCustomer as they are modular and have
separation of concerns
*****************************************
>> It is quite normal to have a single API with different verb + resource combinations.
However, this fits an Experience API or a Process API well; it is not the best architectural
style for system APIs. So the option with just one customerManagement API is not the best
choice here.
>> The option with APIs named in the createCustomerInCRMZ format is the next-closest
choice with respect to modularity and maintainability, but the API names are directly
coupled to the legacy system. A more future-proof approach is to name APIs in a way that
abstracts away the backend system names, as this allows seamless replacement or
migration of any backend system at any time. So this is not the correct choice either.
>> createCustomer, amendCustomer, retrieveCustomer and suspendCustomer is the right
approach and the best fit among the options: the APIs are modular, their names are
decoupled from the backend system, and together they cover everything a system API
needs here.
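As a sketch of what this decoupling looks like in practice, a system API can expose a
backend-agnostic name and resource path while only its implementation knows about
CRM-Z. The flow, listener configuration, and requester configuration names below are
hypothetical:

    <!-- the API's name and resource path say nothing about CRM-Z -->
    <flow name="createCustomer-main-flow">
      <http:listener config-ref="customer-httpListenerConfig" path="/customers"/>
      <!-- only this outbound call is coupled to the legacy system; replacing
           CRM-Z later changes the requester configuration, not the API itself -->
      <http:request method="POST" config-ref="CRMZ_HTTP_Config" path="/crmz/customers"/>
    </flow>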
An Order API must be designed that contains significant amounts of integration logic and
involves the invocation of the Product API.
The power relationship between Order API and Product API is one of "Customer/Supplier",
because the Product API is used heavily throughout the organization and is developed by a
dedicated development team located in the office of the CTO.
What strategy should be used to deal with the API data model of the Product API within the
Order API?
A.
Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model
B.
Work with the API data types of the Product API directly when implementing the integration logic of the Order API such that the Order API uses the same (unchanged) data types as the Product API
C.
Implement an anti-corruption layer in the Order API that transforms the Product API data
model into internal data types of the Order API
D.
Start an organization-wide data modeling initiative that will result in an Enterprise Data
Model that will then be used in both the Product API and the Order API
Convince the development team of the Product API to adopt the API data model of the
Order API such that the integration logic of the Order API can work with one consistent
internal data model
Explanation:
Correct Answer: Convince the development team of the Product API to adopt the API data
model of the Order API such that the integration logic of the Order API can work with one
consistent internal data model
*****************************************
Key details to note from the given scenario:
>> The power relationship between the Order API and the Product API is Customer/Supplier
So, per the rules of "Power Relationships", the caller (in this case the Order API team)
requests features from the called party (the Product API team), and the Product API team
would need to accommodate those requests.
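For contrast, had the anti-corruption-layer option been the chosen strategy, in a Mule
implementation it would typically appear as a transformation step that maps the Product
API's data types into the Order API's internal model before any integration logic runs. The
field names in this DataWeave sketch are purely hypothetical:

    <ee:transform doc:name="Product API model to Order API internal model">
      <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
    output application/json
    ---
    {
      // hypothetical mapping that shields Order API logic from Product API types
      itemId: payload.productCode,
      itemName: payload.productDescription,
      unitPrice: payload.listPrice
    }]]></ee:set-payload>
      </ee:message>
    </ee:transform>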
What is the main change to the IT operating model that MuleSoft recommends to
organizations to improve innovation and clock speed?
A.
Drive consumption as much as production of assets; this enables developers to discover
and reuse assets from other projects and encourages standardization
B.
Expose assets using a Master Data Management (MDM) system; this standardizes
projects and enables developers to quickly discover and reuse assets from other projects
C.
Implement SOA for reusable APIs to focus on production over consumption; this
standardizes on XML and WSDL formats to speed up decision making
D.
Create a lean and agile organization that makes many small decisions everyday; this
speeds up decision making and enables each line of business to take ownership of its
projects
Drive consumption as much as production of assets; this enables developers to discover
and reuse assets from other projects and encourages standardization
Explanation:
Correct Answer: Drive consumption as much as production of assets; this enables
developers to discover and reuse assets from other projects and encourages
standardization
*****************************************
>> The main thrust of the IT operating model that MuleSoft recommends and popularized
is to change the way assets are delivered, from a production-only model to a production +
consumption model, which is achieved through an API strategy called API-led connectivity.
>> The assets built should also be discoverable and self-serviceable, so that they can be
reused across lines of business and the wider organization.
>> MuleSoft's IT operating model says nothing about SDLC methodologies (Agile, Lean,
etc.) or MDM, so the options suggesting those are not valid.
References:
https://blogs.mulesoft.com/biz/connectivity/what-is-a-center-for-enablement-c4e/
https://www.mulesoft.com/resources/api/secret-to-managing-it-projects
A system API is deployed to a primary environment as well as to a disaster recovery (DR)
environment, with different DNS names in each environment. A process API is a client to
the system API and is being rate limited by the system API, with different limits in each of
the environments. The system API's DR environment provides only 20% of the rate limiting
offered by the primary environment. What is the best API fault-tolerant invocation strategy
to reduce overall errors in the process API, given these conditions and constraints?
A.
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment
B.
Invoke the system API deployed to the primary environment; add retry logic to the process
API to handle intermittent failures by invoking the system API deployed to the DR
environment
C.
In parallel, invoke the system API deployed to the primary environment and the system API
deployed to the DR environment; add timeout and retry logic to the process API to avoid
intermittent failures; add logic to the process API to combine the results
D.
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke a copy of the process API
deployed to the DR environment
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment
Explanation:
Correct Answer: Invoke the system API deployed to the primary environment; add timeout
and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the
system API deployed to the DR environment
*****************************************
There is one important consideration to note in the question: the system API in the DR
environment provides only 20% of the rate limit offered by the primary environment.
Comparatively, far fewer calls are allowed into the DR environment's API than into the
primary one. With this in mind, let's analyze the right and best fault-tolerant invocation
strategy.
1. Invoking both system APIs in parallel is definitely NOT feasible because of the 20% limit
on the DR environment. Calling it in parallel every time would quickly exhaust the DR
environment's rate limit, leaving no headroom for the genuine failover calls that need to
get through when the primary actually fails.
2. Another option suggests adding timeout and retry logic to the process API when
invoking the primary environment's system API. This is good so far. However, when all
retries fail, that option falls back to a copy of the process API in the DR environment,
which is not right or recommended. Only the system API should be considered for fallback,
not the whole process API: process APIs usually perform heavy orchestration across many
other APIs, which we do not want to repeat by calling the DR environment's process API.
So this option is NOT right.
3. One more option suggests adding retry (but no timeout) logic to the process API and
retrying directly against the DR environment's system API instead of retrying the primary
environment's system API first. This is not a proper fallback at all: a proper fallback should
occur only after all retries against the primary environment have been performed and
exhausted. Here, the option falls back on the first failure without retrying the main API,
so this option is NOT right either.
This leaves us with the one option that is right and the best fit:
- Invoke the system API deployed to the primary environment
- Add timeout and retry logic around that call in the process API
- If it still fails after all retries, invoke the system API deployed to the DR environment
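A minimal Mule 4 sketch of this strategy; the configuration names, retry count, and retry
interval are illustrative assumptions, and the two-second timeout is borrowed from the
earlier question for concreteness:

    <flow name="invoke-system-api-flow">
      <!-- try the primary environment first, with a timeout and retries -->
      <until-successful maxRetries="3" millisBetweenRetries="1000">
        <http:request config-ref="SystemAPI_Primary_Config"
                      path="/resource" responseTimeout="2000"/>
      </until-successful>
      <error-handler>
        <!-- only after all retries against primary are exhausted, fall back to DR -->
        <on-error-continue type="ANY">
          <http:request config-ref="SystemAPI_DR_Config"
                        path="/resource" responseTimeout="2000"/>
        </on-error-continue>
      </error-handler>
    </flow>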
A customer wants to host their MuleSoft applications in CloudHub 1.0, and these
applications should be available at the domain https://api.acmecorp.com.
After creating a dedicated load balancer (DLB) called acme-dlb-prod, which further action
must the customer take to complete the configuration?
A. Configure the DLB with a TLS certificate for api.acmecorp.com and create an A record for api.acmecorp.com to the public IP addresses associated with their DLB
B. Configure the DLB with a TLS certificate for api.acmecorp.com and create a CNAME record from api.acmecorp.com to acme-dlb-prod.lb.anypointdns.net
C. Configure the DLB with a TLS certificate for acme-dlb-prod.lb.anypointdns.net and create a CNAME record from api.acmecorp.com to acme-dlb-prod.lb.anypointdns.net
D. Configure the DLB with a TLS certificate for api.acmecorp.com and create a CNAME record from api.acmecorp.com to acme-dlb-prod.ei.cloudhub.io
Explanation:
When setting up a custom domain for MuleSoft applications hosted on
CloudHub 1.0 using a dedicated load balancer (DLB), follow these steps:
1. Set up the TLS certificate: configure the DLB (acme-dlb-prod) with a TLS certificate
that covers the custom domain api.acmecorp.com. This certificate allows HTTPS traffic to
be securely directed through the DLB to your Mule applications.
2. Create the CNAME record: in the DNS configuration for acmecorp.com, add a CNAME
record that maps api.acmecorp.com to the DLB's generated DNS name,
acme-dlb-prod.lb.anypointdns.net, so that requests to the custom domain resolve to the
DLB.