The Line of Business (LoB) of an eCommerce company is requesting a process that sends an automated email notification every time a new order is processed through the customer-facing mobile application or through the company's internal web application. In the future, additional notification channels may be added, for example text messages and push notifications. What is the most effective API-led connectivity approach for the scenario described above?
A. Create one Experience API for the web application and one for the mobile application.
Create a Process API to orchestrate and retrieve the email template from a database.
Create a System API that sends the email using the Anypoint Connector for Email.
B. Create one Experience API for the web application and one for the mobile application.
Create a Process API to orchestrate, retrieve the email template from a database, and
send the email using the Anypoint Connector for Email.

C. Create Experience APIs for both the web application and mobile application.
Create a Process API to orchestrate, retrieve the email template from a database, and
send the email using the Anypoint Connector for Email.
D. Create Experience APIs for both the web application and mobile application.
Create a Process API to orchestrate and retrieve the email template from a database.
Create a System API that sends the email using the Anypoint Connector for Email.
Explanation:
In this scenario, the approach that best satisfies API-led connectivity principles and supports future scalability is option A: dedicated Experience APIs for the web and mobile applications, a Process API that orchestrates the flow and retrieves the email template from a database, and a separate System API that sends the email using the Anypoint Connector for Email. Keeping the email-sending capability in its own System API means that future channels such as text messages or push notifications can each be added as another System API orchestrated by the Process API, without changing the Experience layer.
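To make this layering concrete, here is a small conceptual sketch in Python. Every name in it is hypothetical, and each function merely stands in for a separately deployed Experience, Process, or System API; it illustrates the call structure of option A, not a Mule implementation.

```python
# Conceptual sketch of the API-led layering described in option A.
# All names are hypothetical; each function stands in for a separately
# deployed Experience, Process, or System API.

def email_system_api(recipient: str, subject: str, body: str) -> None:
    """System API: the only layer that knows how to send email
    (in Mule, this would use the Anypoint Connector for Email)."""
    print(f"Sending email to {recipient}: {subject} - {body}")

def notification_process_api(order: dict) -> None:
    """Process API: orchestrates the notification. It retrieves the
    email template (from a database in the scenario; hard-coded here)
    and delegates delivery to a channel-specific System API."""
    template = "Your order {order_id} has been processed."  # stand-in for a DB lookup
    body = template.format(order_id=order["id"])
    email_system_api(order["customer_email"], "Order confirmation", body)
    # Future channels (SMS, push notifications) become additional
    # System API calls here, with no change to the Experience layer.

def mobile_experience_api(order: dict) -> None:
    """Experience API consumed by the mobile application."""
    notification_process_api(order)

def web_experience_api(order: dict) -> None:
    """Experience API consumed by the internal web application."""
    notification_process_api(order)

web_experience_api({"id": "1001", "customer_email": "customer@example.com"})
```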
A REST API is being designed to implement a Mule application.
What standard interface definition language can be used to define REST APIs?
A. Web Service Definition Language (WSDL)
B. OpenAPI Specification (OAS)
C. YAML
D. AsyncAPI Specification
OpenAPI Specification (OAS)
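For reference, the snippet below sketches what a minimal OpenAPI (OAS) 3.0 definition looks like. OAS documents are normally authored directly in YAML or JSON; here the document is simply built as a Python dictionary and printed as JSON to keep all examples in one language. The Order API title and /orders path are made up for illustration.

```python
import json

# A minimal (hypothetical) OpenAPI 3.0 document for a single REST resource.
oas_document = {
    "openapi": "3.0.0",
    "info": {"title": "Order API", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {
                "summary": "List orders",
                "responses": {
                    "200": {
                        "description": "A list of orders",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"type": "object"},
                                },
                            },
                        },
                    },
                },
            },
        },
    },
}

print(json.dumps(oas_document, indent=2))
```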
An application updates an inventory running only one process at any given time to keep the inventory consistent. This process takes 200 milliseconds (.2 seconds) to execute; therefore, the scalability threshold of the application is five requests per second. What is the impact on the application if horizontal scaling is applied, thereby increasing the number of Mule workers?
A. The application scalability threshold is five requests per second regardless of the horizontal scaling
B. The total process execution time is now 100 milliseconds (.1 seconds)
C. The application scalability threshold is now 10 requests per second
D. Horizontal scaling cannot be applied to an already-running application
Explanation:
Given that the application is designed to handle only one process at a time to maintain inventory consistency, horizontal scaling will not raise the processing limit.
Single-Process Constraint: because only a single process may run at any given time, every request is executed serially. Each execution takes 200 milliseconds, so the maximum throughput is 1 / 0.2 = 5 requests per second. Adding Mule workers through horizontal scaling does not let those workers run the process concurrently, so the application's scalability threshold remains five requests per second (option A).
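A quick back-of-the-envelope sketch of that arithmetic (the 200 ms figure comes from the question; the function names are made up):

```python
PROCESS_TIME_S = 0.2  # one inventory update takes 200 milliseconds

def serialized_throughput(workers: int) -> float:
    """Maximum requests/second when only ONE process may run at any time.
    The single-process constraint serializes all work, so the number of
    workers does not appear in the result."""
    return 1 / PROCESS_TIME_S

def concurrent_throughput(workers: int) -> float:
    """What the throughput would be IF each worker could process independently."""
    return workers / PROCESS_TIME_S

for n in (1, 2, 4):
    print(n, serialized_throughput(n), concurrent_throughput(n))
# serialized: always 5.0 req/s; unconstrained it would be 5, 10, 20 req/s
```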
A retail company is using an Order API to accept new orders. The Order API uses a JMS
queue to submit orders to a backend order management service. The normal load for
orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore.
The CPU load of each CloudHub worker normally runs well below 70%. However, several
times during the year the Order API gets four times (4x) the average number of orders.
This causes the CloudHub worker CPU load to exceed 90% and the order submission time
to exceed 30 seconds. The cause, however, is NOT the backend order management
service, which still responds fast enough to meet the response SLA for the Order API.
What is the MOST resource-efficient way to configure the Mule application's CloudHub
deployment to help the company cope with this performance challenge?
A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore
B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers
D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
Explanation:
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU
utilization greater than 70%
*****************************************
The scenario in the question clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.
Based on this, we need neither to permanently increase the size of each worker nor to permanently increase the number of workers. That would be wasteful, because outside those occasional peaks the extra resources would sit idle.
That leaves two options: use a horizontal CloudHub autoscaling policy to automatically increase the number of workers, or use a vertical CloudHub autoscaling policy to automatically increase the vCore size of each worker.
Here, we need to take two things into consideration:
1. CPU
2. Order Submission Rate to JMS Queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; either one helps bring the utilization back below 90%.
>> However, with vertical scaling the application is still load balanced across only two workers, so from the order submission perspective there may not be much improvement in the incoming request processing rate or the order submission rate to the JMS queue. The throughput would stay roughly the same; only the CPU utilization would come down.
>> With horizontal scaling, on the other hand, new workers are spawned and load balanced alongside the existing ones, which increases throughput as well. This addresses both the CPU load and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the best answer.
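As a rough illustration of why temporarily adding workers absorbs the spike, here is a simple capacity estimate. The 70% trigger and the 4x spike come from the question; the 60% baseline CPU and the assumption that CPU load scales linearly with each worker's share of the traffic are assumptions made only for this sketch.

```python
import math

BASELINE_WORKERS = 2
BASELINE_CPU = 0.60   # assumed steady-state CPU, "well below 70%"
TARGET_CPU = 0.70     # the autoscaling trigger threshold

def workers_needed(spike_multiplier: float) -> int:
    """Rough number of identical workers needed to keep CPU under the target,
    assuming CPU load scales linearly with each worker's share of traffic."""
    total_load = BASELINE_WORKERS * BASELINE_CPU * spike_multiplier
    return math.ceil(total_load / TARGET_CPU)

print(workers_needed(1))  # ~2 workers suffice for normal traffic
print(workers_needed(4))  # ~7 workers during the 4x spike, released afterwards
```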
An eCommerce company is adding a new Product Details feature to its website. A customer will open the product catalog page, where a new Product Details link will appear next to each product; clicking it retrieves the product detail description. Product detail data is updated with product update releases, once or twice a year. Presently, the database response time has been very slow due to high volume. What action retrieves the product details with the lowest response time, fault tolerance, and consistent data?
A. Select the product details from a database in a Cache scope and return them within the API response
B. Select the product details from a database and put them in Anypoint MQ; the Anypoint MQ subscriber will receive the product details and return them within the API response
C. Use an object store to store and retrieve the product details originally read from a database and return them within the API response
D. Select the product details from a database and return them within the API response
Which statement is true about Spike Control policy and Rate Limiting policy?
A. All requests are rejected after the limit is reached in Rate Limiting policy, whereas the requests are queued in Spike Control policy after the limit is reached
B. In a clustered environment, the Rate Limiting and Spike Control policies are applied to each node in the cluster
C. To protect Experience APIs by limiting resource consumption, Rate Limiting policy must be applied
D. In order to apply Rate Limiting and Spike Control policies, a contract to bind client application and API is needed for both
Version 3.0.1 of a REST API implementation represents time values in PST time using ISO 8601 hh:mm:ss format. The API implementation needs to be changed to instead represent time values in CEST time using ISO 8601 hh:mm:ss format. When following the semver.org semantic versioning specification, what version should be assigned to the updated API implementation?
A. 3.0.2
B. 4.0.0
C. 3.1.0
D. 3.0.1
4.0.0
Explanation:
Correct Answer: 4.0.0
*****************************************
As per semver.org semantic versioning specification:
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes.
- MINOR version when you add functionality in a backwards compatible manner.
- PATCH version when you make backwards compatible bug fixes.
As per the scenario in the question, the API implementation is completely changing its behavior. Although the time format remains hh:mm:ss and there is no change to the schema with respect to format, the API will behave differently after this change because the time values returned will be completely different.
Example: before the change a time might be returned as 09:00:00, representing PST. After the change, the same instant will be returned as 19:00:00, because CEST (UTC+2) is 10 hours ahead of PST (UTC-8).
>> This may lead to unexpected behavior in API clients, depending on how they handle the times in the API response. All API clients need to be informed that the API is changing and will now return times in CEST. This is therefore an incompatible (breaking) change, so the MAJOR version must be incremented and the updated API implementation should be versioned 4.0.0.
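A minimal sketch of the change being described, using fixed UTC offsets only to pin down PST (UTC-8) and CEST (UTC+2); the date and time values are arbitrary:

```python
from datetime import datetime, timedelta, timezone

PST = timezone(timedelta(hours=-8), "PST")   # fixed UTC-8 offset
CEST = timezone(timedelta(hours=2), "CEST")  # fixed UTC+2 offset

# The same instant, first as the old implementation represented it (PST)...
instant = datetime(2024, 3, 1, 9, 0, 0, tzinfo=PST)
print(instant.strftime("%H:%M:%S"))                   # 09:00:00

# ...and as the updated implementation would represent it (CEST).
print(instant.astimezone(CEST).strftime("%H:%M:%S"))  # 19:00:00
```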
Which of the following, when used together, make the IT Operational Model effective?
A. Create reusable assets, do marketing on the created assets across the organization, and arrange LoB reviews from time to time to check whether the assets are being consumed
B. Create reusable assets, make them discoverable so that LoB teams can self-serve and browse the APIs, and get active feedback and usage metrics
C. Create reusable assets and make them discoverable so that LoB teams can self-serve and browse the APIs
Explanation:
Correct Answer: Create reusable assets, make them discoverable so that LOB teams can self-serve and browse the APIs, and get active feedback and usage metrics.