The #1 site to find MENA phone number databases and accurate B2B & B2C phone number databases. Emailproleads.com provides verified contact information for people in your target industry. It has never been easier to purchase a contact list with reliable information that allows you to make real connections. These databases will help you make more sales and target your audience. You can buy pre-made mailing lists or build your own marketing list with our online list-builder tool. Find new business contacts online today!

Just $199.00 for the entire list

Customize your database with data segmentation

Phone Database List

Free samples of MENA mobile number database

We provide free samples of our ready-to-use MENA contact lists. Download the samples to verify the data before you make a purchase.

Phone Contact Lists
Contact Lists

Human Verified MENA Mobile Number Lists

The data is subject to a seven-tier verification process, including artificial intelligence, manual quality control, and an opt-in process.

Best MENA contact number lists

Highlights of our MENA Contact Lists

First Name
Last Name
Phone Number
Address
City
State
County
Zip
Age
Income
Home Owner
Married
Property

Net Worth
Household
Credit Rating
Dwelling Type
Political
Donor
Ethnicity
Language Spoken
Email
Latitude
Longitude
Timezone
Presence of children
Gender

DOB (Birth Date)
Occupation
Presence Of Credit Card
Investment Stock Securities
Investments Real Estate
Investing Finance Grouping
Investments Foreign
Investment Estimated
Residential Properties Owned
Traveler
Pets
Cats
Dogs
Health

Institution Contributor
Donates by Mail
Veteran in Household
Heavy Business Travelers
High Tech Leader
Smoker
Mail Order Buyer
Online Purchasing Indicator
Environmental Issues Charitable Donation
International Aid Charitable Donation
Home Swimming Pool

See what our customers have to say

FAQ

Our email lists are divided into three categories: regions, industries, and job functions. Regional email lists help businesses target consumers or businesses in specific areas. MENA email lists broken down by industry help optimize your advertising efforts. And if you’re marketing to a niche buyer, our email lists filtered by job function can be incredibly helpful.

Ethically sourced and robust database of 1 billion+ unique email addresses

Our B2B and B2C data lists cover 100+ countries, including APAC and EMEA, and the most sought-after industries, including Automotive, Banking & Financial Services, Manufacturing, Technology, and Telecommunications.

In general, once we’ve received your request, it takes 24 hours to compile your specific data, and you’ll receive it within 24 hours of your initial order.

Our data standards are extremely high. We pride ourselves on providing a 97% accurate MENA telephone number database, and we’ll provide you with replacement data for any information that doesn’t meet your standards or expectations.

We pride ourselves on providing customers with high-quality data. Our MENA email database and mailing lists are updated semi-annually, conform to all requirements set by the Direct Marketing Association, and comply with CAN-SPAM.

MENA cellular phone number list

Emailproleads provides mobile number databases to individuals and organizations for the sole purpose of promoting their businesses through digital marketing. The Emailproleads mobile number database helps you reach the highest level of business conversations.

Mobile number databases are a crucial marketing tool, containing numbers from all over the globe. Since the arrival of smartphones, there has been an exponential rise in the number of buyers, because technology has changed the way marketing works. A mobile number database is now essential for every retailer marketing and selling goods and services, as internet-connected mobiles have spread across the globe.

MENA contact number lists

Every now and again, we see advertisements promoting a company, and those ads drive the company’s growth. You can extend your marketing further using other digital marketing services such as bulk SMS, voice calls, WhatsApp marketing, and more.

Emailproleads checks every mobile number in the database using various strategies and techniques to ensure that buyers receive the most appropriate and relevant customer numbers and successfully meet their marketing goals and objectives.

This service helps you find loyal customers who are keen to purchase your product. If you’d like to see your brand recognized by customers, a mobile number database is among the most effective ways to accomplish this.

What is Phone Number Data?

A telephone number is a unique number that telecommunication firms assign to their customers, allowing calls to be routed to them using destination codes. Telecom companies assign numbers within the limits of regional or national telephone numbering plans. With more than five billion mobile phone users around the world, phone number data is now a gold mine for government and business operations.

How is Phone Number Data collected?

Having the numbers of current and potential customers opens up a wealth of opportunities for lead generation and CRM. Customer numbers are an excellent way to boost marketing campaigns, since they allow marketers to engage their target audience via rich multimedia and mobile messaging. Gathering phone number data is therefore vital to any modern marketing strategy. Strategies companies can use to collect phone number data include:

* Adding contact forms to websites.
* Inviting customers to request phone calls.
* Using mobile keywords in promotions to encourage prospective customers to contact you.
* Prompting users, through app updates, to confirm their contact details each time they sign in.
* Acquiring phone numbers from third-party data providers that already have the information.

What are the main characteristics of Phone Number Data?

One of the key advantages of phone number data is that it reveals the geographic location of mobile users, because phone numbers contain strings specific to a region or country that indicate where the user is based. This is useful in targeted campaigns: marketers focusing on a specific area can concentrate their efforts there.

To prevent duplicates and improve accessibility, phone number data is typically stored in the E.164 international format, which defines the essential components of a recorded phone number: a country code (CC), a national destination code (NDC), and a subscriber number (SN).
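As a rough illustration, an E.164 number is a leading plus sign followed by at most 15 digits, with the country code first. The following is a minimal Java sketch of a format check along these lines; it validates the shape only and is not a substitute for a full phone validation library:

import java.util.regex.Pattern;

public class E164Validator {

    // E.164 shape: "+" followed by 2-15 digits, first digit non-zero.
    // This checks the format only; it doesn't verify that the country
    // code or subscriber number actually exist.
    private static final Pattern E164 = Pattern.compile("^\\+[1-9]\\d{1,14}$");

    public static boolean isValidE164(String phoneNumber) {
        return phoneNumber != null && E164.matcher(phoneNumber).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidE164("+971501234567")); // true: CC 971 (UAE) + NDC + SN
        System.out.println(isValidE164("0501234567"));    // false: no country code
    }
}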

What is Phone Number Data used for?

The possibilities opened up by phone number data are endless. A phone number database means that companies worldwide can market their products directly to prospective customers without relying on third-party companies.

Because phone numbers are region- and country-specific, phone number data gives marketers a comprehensive view of the reach of their campaigns, which helps them decide where to focus their time and resources. Governments also use mobile number data to study people’s mobility and geographic subdivisions, support urban planning and development programs, and address security concerns such as KYC.

How can you assess the quality of Phone Number Data?

When judging the quality of phone number data, users should consider these fundamental quality criteria:

* Completeness. All phone number records in the database must be complete.
* Accuracy. This measures how well the data identifies the real-world individual it describes.
* Consistency. This indicates how well the data provider follows the rules that facilitate data retrieval.
* Accessibility. The phone number database should be organized to allow easy navigation and immediate commercial use.

Where can I purchase Phone Number Data?

The data providers and vendors listed on Datarade offer Phone Number Data products and samples. Popular Phone Number Data products and datasets available on the platform include China B2B phone numbers (Chinese businesses) by Octobot, IPQS Phone Number Validation and Reputation by IPQualityScore (IPQS), and B2B direct-dial and mobile contact numbers for cold calling, with real-time verified contact email and phone numbers, by Lead for Business.

MENA Phone Number Database

You can find phone number data from Emailproleads.

What data types are similar to Phone Number Data?

Phone Number Data is comparable to Address Data, Email Address Data, MAID Hashed Email Data, Identification Linkage Data, and Household-Level Identity Data. These categories of data are typically employed for Identity Resolution and Data Onboarding.

What are the most popular uses of Phone Number Data?

The top use cases for Phone Number Data are Identity Resolution, Data Onboarding, and Direct Marketing.

Let’s say you’re running a sales campaign that demands you connect with as many people as possible. Even when the task is laid out for you, it can be challenging to know where to start. First, build your list of prospective customers, and then store your call data in an electronic database.

MENA Telephone Number Lists

Though you might believe that compiling lists of telephone numbers and storing them in databases is all you need to launch a cold-calling campaign, that’s not the case. Since a telephone number database can contain thousands or millions of leads, along with important data points about each potential customer, it is essential to follow best practices for managing a telephone number database so you avoid becoming overwhelmed or losing important data.

To build a phone number database that delivers results, you must start from the right place. You can do this by purchasing sales lead lists from a reliable, dependable company like ours. It’s equally important to have the right tools to allow your team to contact as many people as possible.

In addition to high-quality telephone marketing lists, we provide advice on the best database targeting techniques and dialer software that can make lead generation more efficient and less expensive over time. Our customer service representatives are ready to assist you.

MENA Telephone Number Database Best Practices

After you’ve established the basis for success by acquiring high-quality lead lists and implementing dialers that can boost the number of calls your team makes by up to 400 percent, you’re ready to learn the best practices for your industry. By adhering to phone list and database best practices, you’ll dramatically improve the odds that your team succeeds in the short and long term.

MENA cell phone number list

Here are the telemarketing database best practices you should make it a priority to follow.

Get Organized
A well-organized MENA mobile phone directory includes contacts organized by country, postal code, area code, city, and province. By narrowing your calls to just one of these criteria, you can incorporate new business information into your list, then sort and retarget top leads.

MENA mobile number list

Create a strategy for managing your phone lists. Naturally, your organizational plan should be based on the purpose of your cold-calling campaign. Your business goals determine the traits your most promising prospects share. Build a profile of your ideal candidate based on your campaign plans, and order your leads list so that the candidates who best match that profile appear first.

MENA cellular phone number list

Determine Who Can Access and Edit Your Database
Your phone number list isn’t only a financial investment; it’s also a resource your team can use to increase sales. It’s valuable not just because you paid for it, but because of its potential to improve your bottom line. For that reason, think carefully about who can access and modify your database.

It is generally recommended to restrict database access to the people who use it to communicate with potential customers in pursuit of your campaign’s goals. If an individual is not active in your marketing campaign, there’s no reason for them to have access to your telephone number database.

Within that group, it’s best to grant editing privileges only to the people who need them. In practice, this usually means giving editing rights only to the agents conducting cold calls, since they will need to update records and add notes that can aid subsequent calls.

MENA phone number database

Create Your Database
Databases are knowledge centers that store information for your sales personnel, and they are vital for capturing that knowledge and sharing it with your sales staff. Even if it’s only used to keep call notes, a callback database helps your sales team extract maximum value from your telemarketing lists.

As time passes, your phone number list will likely expand to include more contact numbers and more information about your customers. Whether you get referrals from current prospects or purchase new lead lists, it’s essential to grow your database with as much data as possible to support your business goals in the near term, the long term, and every step in between.

Maintain Your Database
Although you want your database to expand over time, you don’t want it to fill up with obsolete or useless details. To keep your database from becoming bloated with dead information, maintain it regularly: remove outdated records and update your prospects’ contact details.

One of the most effective maintenance steps is to make sure your database doesn’t contain numbers on the Do Not Call list. Calling a number that’s on the Do Not Call registry could cost your business a great deal of money, perhaps even millions. With the free tools available online, consider scrubbing your data against the Do Not Call registry at least twice a year.
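The scrubbing itself is simple to automate. Here is a minimal Java sketch, assuming you’ve obtained the registry entries as a text file with one number per line (the file names are illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class DncScrubber {
    public static void main(String[] args) throws IOException {
        // Load the Do Not Call registry export (one number per line; illustrative file name).
        Set<String> dnc = new HashSet<>(Files.readAllLines(Path.of("dnc-registry.txt")));

        // Keep only the leads whose numbers are NOT on the registry.
        List<String> leads = Files.readAllLines(Path.of("leads.txt"));
        List<String> callable = leads.stream()
                .filter(number -> !dnc.contains(number.trim()))
                .collect(Collectors.toList());

        Files.write(Path.of("callable-leads.txt"), callable);
        System.out.printf("Kept %d of %d leads%n", callable.size(), leads.size());
    }
}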

Once you’ve learned the basics of telephone lists and best practices for database management, contact

MENA mobile number database

Emailproleads.com to receive the top-quality lead lists you need for your database.

MENA phone number database free download

Download the mobile/cell phone number directory for any city or state, organized by network or operator. A mobile number database is an excellent resource for advertising and bulk SMS, targeting specific regions, electoral campaigns, and other campaigns. Before you use these numbers, verify their “Do Not Disturb” status with TRAI. If DND is activated, you are not permitted to use those numbers to promote your business.

Buy MENA Phone Number Database

It’s the quickest method of building an extensive list of phone numbers for your potential customers. You pay a fixed sum (per list, contact, country, or industry) and receive every mobile number you paid for. You can then use those numbers repeatedly to reach out to customers and convince them to purchase your products or services. Doesn’t that sound great?

MENA phone number listing

Although it may seem like the fastest way to build a list of numbers, it isn’t. There are many risks associated with purchased mobile marketing lists that won’t generate sales:

They’re not well targeted. There’s no way to be sure that anyone on a purchased phone list will pay any attention to your messages or your company.

MENA contact number lists

You have to trust the seller completely. When you purchase a mobile phone list, you have to take the seller’s word for how active the numbers are. It’s possible that the majority of the phone numbers you’re buying are no longer current or relevant.

A further quality aspect is observability. The FTGO team had implemented monitoring and logging in the existing application, but a microservice architecture is a distributed system, and that presents additional challenges. Every request is handled by the API gateway and at least one service. Imagine, for instance, trying to figure out which of six services is the cause of a delay, or trying to understand how a request was handled when the log entries are scattered across five different services. To help you understand your application’s behavior and identify problems, you must use a variety of observability patterns.

This chapter begins by explaining how to implement security in a microservices architecture. Next, I explain how to design services that are configurable, and I cover a couple of different service configuration mechanisms. After that, I discuss how to make your services easier to understand and troubleshoot using observability patterns. I conclude the chapter by showing how to simplify the implementation of these and other concerns by building your services on top of a microservice framework.

Developing secure services

Cybersecurity is now a major concern for every company. Almost daily, there are stories in the news about hackers stealing a company’s data. To develop secure software and stay out of those headlines, a company must address a diverse range of security concerns, including physical security of the hardware, encryption of data in transit and at rest, authentication and authorization, and policies for patching software vulnerabilities. Most of these concerns are the same regardless of whether you’re using a monolithic or microservices architecture. This section focuses on how the microservices architecture affects security at the application level.

An application developer is primarily responsible for implementing four distinct aspects of security:

Authentication – Verifying the identity of the application or human (a.k.a. the principal) attempting to access the application. For example, an application typically verifies a principal’s credentials, such as a user ID and password or an application’s API key and secret.

Authorization – Verifying that the principal is allowed to perform the requested operation on the specified data. Applications typically use a combination of role-based security and access control lists (ACLs). Role-based security assigns each user one or more roles that grant them permission to invoke particular operations. ACLs grant users or roles permission to perform an operation on a particular business object, or aggregate.

Auditing – Recording the operations that a principal performs in order to detect security issues, help customer support, and enforce compliance.

Secure interprocess communication – Ideally, all communication in and out of services should be over Transport Layer Security (TLS). Interservice communication may even need to be authenticated.

I discuss auditing in detail in section 11.3 and cover secure interservice communication when I discuss service meshes in section 11.4.1. This section focuses on implementing authentication and authorization.

I start by describing how security is implemented in the monolithic FTGO application. I then describe the challenges of implementing security in a microservices architecture and explain why techniques that work well in a monolithic architecture can’t be used there. After that, I cover how to implement security in a microservices architecture.

Let’s begin by looking at how the monolithic FTGO application handles security.

A look at security in a traditional monolithic application

The FTGO application has a variety of human users, including consumers, couriers, and restaurant staff. Users access the application via browser-based and mobile applications. All FTGO users must log in to access the application. Figure 11.1 shows how the clients of the monolithic FTGO application authenticate and make requests.

A client first logs in by making a POST request containing the user’s credentials, a user ID and password, to the FTGO application. The application verifies the credentials and returns a session token to the client. The client then includes the session token in each subsequent request it makes to the FTGO application.

Using a security framework

Implementing authentication and authorization correctly is challenging, so it’s best to use a proven security framework. Which framework to use depends on your application’s technology stack. Popular frameworks include the following:

Spring Security – A popular framework for Java applications. It’s a sophisticated framework that handles authentication and authorization.

Apache Shiro – Another Java security framework.

Passport – A security framework popular with NodeJS applications.

A key part of the security implementation is the session, which stores the principal’s identity and roles. The FTGO application is a classic Java EE application, so the session is an in-memory HttpSession. A session is identified by a session token, which clients include in each request. It’s usually an opaque token, such as a cryptographically secure random number. The FTGO application’s session token is an HTTP cookie called JSESSIONID.

Another key part of the security implementation is the security context, which stores information about the user making the current request. The Spring Security framework uses the standard Java EE approach of storing the security context in a static, thread-local variable, which is readily accessible to any code that’s invoked to handle the request. A request handler can call SecurityContextHolder.getContext().getAuthentication() to obtain information about the current user, such as their identity and roles. In contrast, the Passport framework stores the security context as the user attribute of the request.
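For example, a request handler in a Spring Security-based application can read the current principal from the thread-local context roughly like this (the order-ownership check is a hypothetical helper):

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class OrderRequestHandler {

    public void getOrder(long orderId) {
        // Read the security context that the framework stored in a thread-local.
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();

        String userId = auth.getName(); // the principal's identity
        boolean isAdmin = auth.getAuthorities().stream()
                .anyMatch(a -> a.getAuthority().equals("ROLE_ADMIN"));

        // Illustrative rule: admins can see any order, consumers only their own.
        if (!isAdmin) {
            verifyOrderBelongsTo(userId, orderId); // hypothetical helper
        }
        // ... load and return the order ...
    }

    private void verifyOrderBelongsTo(String userId, long orderId) {
        // placeholder for the actual ownership check
    }
}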
The sequence of events shown in figure 11.2 is as follows:

The client makes a login request to the FTGO application.

The login request is handled by LoginHandler, which verifies the credentials, creates the session, and stores information about the principal in the session.

LoginHandler returns a session token to the client.

The client includes the session token in each subsequent request that invokes an operation.

Those requests are first processed by SessionBasedSecurityInterceptor. The interceptor authenticates each request by verifying the session token, and it establishes the security context. The security context describes the principal and its roles.


A request handler uses the security context to determine the user’s identity and to decide whether to allow the user to perform the requested operation.

The FTGO application uses role-based authorization. It defines several roles corresponding to the different kinds of users, including CONSUMER, COURIER, RESTAURANT, and ADMIN. It uses declarative security to restrict access to URLs and service methods to specific roles. Roles are also woven into the business logic. For example, a consumer can only access their own orders, whereas an administrator can access all orders.
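In Spring Security, declarative role-based restrictions on URLs look roughly like the following sketch. It uses the classic WebSecurityConfigurerAdapter style, which matches the era of this kind of application; the paths and role assignments are illustrative:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            // Only consumers and admins may access orders.
            .antMatchers("/orders/**").hasAnyRole("CONSUMER", "ADMIN")
            // Courier-facing endpoints are restricted to couriers.
            .antMatchers("/deliveries/**").hasRole("COURIER")
            .anyRequest().authenticated()
            .and()
            .formLogin();
    }
}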

The security design used by the monolithic FTGO application is only one possible way to implement security. One drawback of an in-memory session is that it requires all requests for a given session to be routed to the same application instance, which complicates load balancing and operations. You must, for example, implement a session-draining mechanism that waits for all sessions to expire before shutting down an application instance. An alternative that avoids these problems is to store the session in a database.

Sometimes you can eliminate the server-side session entirely. For example, many applications have API clients that provide credentials, such as an API key and secret, with every request, so there’s no need to maintain a server-side session. Alternatively, the application can store session state in the session token itself. Later in this section I describe one way to use a session token to store session state. But let’s begin by looking at the challenges of implementing security in a microservices architecture.

11.1.2 Implementing security in a microservices architecture

A microservices architecture is a distributed architecture. Every external request is processed by the API gateway and at least one service. Consider, for example, the getOrderDetails() query discussed in chapter 8. The API gateway handles this query by invoking several services, including Order Service, Kitchen Service, and Accounting Service. Each service must implement some aspect of security. For instance, Order Service must only allow a consumer to view their own orders, which requires a combination of authentication and authorization. To implement security in a microservices architecture, we need to determine who is responsible for authenticating the user and who is responsible for authorization.

One challenge with implementing security in a microservices application is that we can’t simply copy the design used in a monolithic application. That’s because two aspects of a monolithic application’s security architecture don’t translate to a microservices architecture:

In-memory security context – A monolithic application uses an in-memory security context, such as a thread-local, to pass around the user’s identity. Services can’t share memory, so they can’t use an in-memory security context. We need a different mechanism for passing user identity between services in a microservices architecture.

Centralized session – Because an in-memory security context doesn’t make sense, neither does an in-memory session. In theory, multiple services could access a database-backed session, except that doing so would violate the principle of loose coupling. We need a different session mechanism in a microservices architecture.

Let’s begin our exploration of security in a microservices architecture by looking at how to handle authentication.

HANDLING AUTHENTICATION IN THE API GATEWAY

There are a couple of different ways to handle authentication. One option is for the individual services to authenticate the user. The problem with this approach is that it permits unauthenticated requests to enter the internal network, and it relies on every development team correctly implementing security in all of their services. As a result, there’s a significant risk of the application containing security vulnerabilities.

Another problem with implementing authentication in the services is that different clients authenticate in different ways. Pure API clients supply credentials with each request, using basic authentication, for example. Other clients might first log in and then supply a session token with each request. We want to avoid requiring the services to handle this diverse range of authentication mechanisms.

A better approach is for the API gateway to authenticate a request before forwarding it to the services. Centralizing authentication in the API gateway has the advantage that there’s only one place to get right, so there’s a much smaller chance of a security vulnerability. Another benefit is that only the API gateway has to deal with the various authentication mechanisms; that complexity is hidden from the services.

Figure 11.3 shows how this approach works. Clients authenticate with the API gateway. API clients include credentials in each request. Login-based clients POST the user’s credentials to the API gateway’s authentication endpoint and receive a session token. Once the API gateway has authenticated a request, it invokes one or more services.

Pattern: Access token

The API gateway passes a token containing information about the user, such as their identity and their roles, to the services it invokes.

A service invoked by the API gateway needs to know the identity of the user making the request. It also needs to verify that the request has been authenticated. The solution is for the API gateway to include a token in each service request. The service uses the token to validate the request and obtain information about the principal. The API gateway might also give the same token to session-oriented clients to use as the session token.

The sequence of events for API clients is as follows:

A client makes a request, supplying its credentials.

The API gateway authenticates the credentials, creates a security token, and passes that token to the service or services.

For login-based clients, the sequence of events is as follows:

A client makes a login request containing the user’s credentials.

The API gateway returns a security token to the client.

The client includes the security token in each request that invokes an operation.

The API gateway validates the security token and forwards it to the service or services.

A little later in this section, I describe how to implement tokens. But first, let’s look at the other main aspect of security: authorization.

HANDLING AUTHORIZATION

Authenticating a client’s credentials is important, but insufficient. An application must also have an authorization mechanism that verifies the client is allowed to perform the requested operation. For example, in the FTGO application the getOrderDetails() query can only be invoked by the consumer who placed the order (an example of instance-based security) or by a customer service agent who is helping that consumer.

One place to implement authorization is in the API gateway. It can, for example, restrict access to GET /orders/{orderId} to users who are consumers or customer service agents. If a user isn’t allowed to access a particular path, the API gateway can reject the request instead of forwarding it to the service. As with authentication, centralizing authorization in the API gateway reduces the risk of security vulnerabilities. You can implement authorization in the API gateway using a security framework such as Spring Security.

One drawback of implementing authorization in the API gateway is that it risks coupling the gateway to the services, requiring them to be updated in lockstep. What’s more, the API gateway can typically only implement role-based access to URL paths. It’s generally not practical for the gateway to implement ACLs that control access to individual domain objects, because doing so requires detailed knowledge of a service’s domain logic.

The other place to implement authorization is in the services themselves. A service can implement role-based authorization for URLs and service methods. It can also implement ACLs to manage access to aggregates. Order Service can, for example, implement both role-based and ACL-based authorization for access to orders. Other services in the FTGO application implement similar authorization logic.
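Method-level, role- plus instance-based checks of this kind can be expressed in Spring Security roughly as follows, assuming method security is enabled (for example, via @EnableGlobalMethodSecurity(prePostEnabled = true)) and the class is a Spring-managed bean; the SpEL expression and method signature are illustrative:

import org.springframework.security.access.prepost.PreAuthorize;

public class OrderServiceAuthorizationExample {

    // Role-based check combined with an instance-based check: a consumer
    // may only fetch their own orders; an admin may fetch any order.
    @PreAuthorize("hasRole('ADMIN') or (hasRole('CONSUMER') and #consumerId == principal.username)")
    public Object getOrderDetails(String consumerId, long orderId) {
        // ... load the order and verify it belongs to consumerId ...
        return null; // placeholder
    }
}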

USING JWTS TO PASS USER IDENTITY AND ROLES

When implementing security in a microservices architecture, you need to decide which type of token the API gateway should use to pass user information to the services. There are two types of token to choose from. One option is an opaque token, which is typically a UUID. The drawback of opaque tokens is that they reduce performance and availability and increase latency, because the recipient of such a token must make a synchronous RPC call to a security service to validate the token and retrieve the user information.

An alternative that eliminates the call to the security service is a transparent token that contains information about the user. One popular standard for transparent tokens is the JSON Web Token (JWT). A JWT is a standard way to securely represent claims, such as a user’s identity and roles, between two parties. A JWT has a payload, which is a JSON object containing information about the user, such as their identity and roles, plus other metadata, such as an expiration date. It’s signed with a secret that’s known only to the creator of the JWT, such as the API gateway, and its recipient, such as a service. The secret ensures that a malicious third party can’t forge or tamper with a JWT.
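To make this concrete, here is a minimal sketch using the jjwt library (one of several JWT libraries for Java; the choice of library, the claim names, and the secret handling are all illustrative, and the 15-minute expiry anticipates the revocation issue discussed next):

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.util.Date;

public class JwtExample {

    // Shared secret known only to the API gateway and the services.
    private static final byte[] SECRET = "change-me-in-production".getBytes();

    // Issued by the API gateway after it authenticates the user.
    public static String issueToken(String userId, String role) {
        return Jwts.builder()
                .setSubject(userId)
                .claim("role", role)
                .setExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                .signWith(SignatureAlgorithm.HS256, SECRET)
                .compact();
    }

    // Called by a service; throws if the signature is invalid or the token expired.
    public static Claims verifyToken(String token) {
        return Jwts.parser()
                .setSigningKey(SECRET)
                .parseClaimsJws(token)
                .getBody();
    }
}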

One issue with JWTs is that, because a token is self-contained, it’s irrevocable. By design, a service will perform a request after verifying the JWT’s signature and expiration date. As a result, there’s no practical way to revoke a JWT that has fallen into the hands of a malicious third party. The solution is to issue JWTs with short expiration times, because that limits what a malicious party can do with one. One drawback of short-lived JWTs, though, is that the application must somehow continually reissue JWTs to keep the session alive. Fortunately, this is one of the many problems solved by a security standard called OAuth 2.0. Let’s look at how that works.

USING OAUTH 2.0 IN A MICROSERVICES ARCHITECTURE

Imagine you decide to build a User Service for the FTGO application that manages a user database containing information such as each user’s credentials and roles. The API gateway would call the User Service to authenticate a client request and obtain a JWT. You could design a User Service API and implement it with your favorite web framework. But that’s generic functionality that isn’t specific to the FTGO application, and developing such a service wouldn’t be a productive use of development resources.

Fortunately, you don’t need to develop this kind of security infrastructure. You can use an off-the-shelf service or framework that implements a standard called OAuth 2.0. OAuth 2.0 is an authorization protocol originally designed to let a user of a public cloud service, such as GitHub or Google, grant a third-party application access to their information without revealing their password. For example, OAuth 2.0 is how you securely grant a third-party cloud-based Continuous Integration (CI) service access to your GitHub repository.

Although the original goal of OAuth 2.0 was to authorize access to public cloud services, you can also use it for authentication and authorization in your own application. Let’s take a quick look at how a microservices architecture might use OAuth 2.0.

About OAuth 2.0

OAuth 2.0 is a complex topic. In this chapter, I can only provide a brief overview and describe how it can be used in a microservices architecture. For more information on OAuth 2.0, take a look at the online book OAuth 2.0 Servers by Aaron Parecki (www.oauth.com). Chapter 7 of Spring Microservices in Action (Manning, 2017) also covers this topic (https://livebook.manning.com/#!/book/spring-microservices-in-action/chapter-7/).

The key concepts in OAuth 2.0 are the following:

Authorization Server – Provides an API for authenticating users and obtaining an access token and a refresh token. Spring OAuth is a great example of a framework for building an OAuth 2.0 authorization server.

Access Token – A token that grants access to a Resource Server. The format of the access token is implementation dependent, but some implementations, such as Spring OAuth, use JWTs.

Refresh Token – A long-lived, yet revocable, token that a Client uses to obtain a new Access Token.

Resource Server – A service that uses an access token to authorize access. In a microservices architecture, the services are resource servers.

Client – A client that wants to access a Resource Server. In a microservices architecture, the API gateway is the OAuth 2.0 client.

Later in this section, I describe how to support login-based clients. But first, let’s talk about how to authenticate API clients.

Figure 11.4 shows how the API gateway authenticates a request from an API client. The API gateway authenticates the API client by making a request to an OAuth 2.0 authorization server, which returns an access token. The API gateway then makes one or more requests containing the access token to the services.
The sequence of events shown in figure 11.4 is as follows:

The client makes a request, supplying its credentials using basic authentication.

The API gateway makes an OAuth 2.0 Password Grant request (www.oauth.com/oauth2-servers/access-tokens/password-grant/) to the OAuth 2.0 authorization server.

The authorization server validates the API client’s credentials and returns an access token and a refresh token.

The API gateway includes the access token in the requests it makes to the services. A service validates the access token and uses it to authorize the request.
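On the wire, step 2 is just a form-encoded POST to the authorization server’s token endpoint. Here is a minimal sketch using Java’s built-in HTTP client; the endpoint URL, client ID, and credentials are all illustrative:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PasswordGrantExample {

    public static void main(String[] args) throws Exception {
        // OAuth 2.0 Password Grant: exchange the user's credentials for tokens.
        String form = "grant_type=password"
                + "&username=alice"
                + "&password=secret"
                + "&client_id=api-gateway"
                + "&client_secret=gateway-secret";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.example.com/oauth/token")) // illustrative endpoint
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // On success, the JSON body contains access_token, refresh_token, and so on.
        System.out.println(response.body());
    }
}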

An OAuth 2.0-based API gateway can authenticate session-oriented clients by using the OAuth 2.0 access token as the session token. What’s more, when the access token expires, the gateway can obtain a new access token using the refresh token. Figure 11.5 shows how an API gateway can use OAuth 2.0 to handle session-oriented clients. A client begins a session by POSTing its credentials to the API gateway’s /login endpoint. The API gateway returns an access token and a refresh token to the client. The client then supplies both tokens when it makes requests to the API gateway.

The sequence of events is as follows:

The login-based client POSTs its credentials to the API gateway.

The API gateway’s Login Handler makes an OAuth 2.0 Password Grant request (www.oauth.com/oauth2-servers/access-tokens/password-grant/) to the OAuth 2.0 authorization server.

The authorization server validates the client’s credentials and returns an access token and a refresh token.

The API gateway returns the access and refresh tokens to the client, as cookies, for example.

The client includes the access and refresh tokens in the requests it makes to the API gateway.

The API gateway’s Session Authentication Interceptor validates the access token and includes it in the requests it makes to the services.

If the access token has expired, or is about to expire, the API gateway obtains a new access token by making an OAuth 2.0 Refresh Grant request (www.oauth.com/oauth2-servers/access-tokens/refreshing-access-tokens/), which contains the refresh token, to the authorization server. Assuming the refresh token hasn’t expired or been revoked, the authorization server returns a new access token. The API gateway passes the new access token to the services and returns it to the client.

A key benefit of using OAuth 2.0 is that it’s a proven security standard. Using an off-the-shelf OAuth 2.0 authorization server means you don’t have to waste time reinventing the wheel or risk developing an insecure design. But OAuth 2.0 isn’t the only way to implement security in a microservices architecture. Whichever approach you use, the three key ideas are as follows:

The API gateway is responsible for authenticating clients.

The API gateway and the services use a transparent token, such as a JWT, to pass information about the principal.

A service uses the token to obtain the principal’s identity and roles.

Now that we’ve looked at how to make services secure, let’s see how to make them configurable.

Designing configurable services

Imagine you’re responsible for the Order History Service. As figure 11.6 shows, the service consumes messages from Apache Kafka and reads and writes items in an AWS DynamoDB table. In order to run, this service needs various configuration properties, such as the network location of Apache Kafka and the credentials and network location for AWS DynamoDB.

The values of these configuration properties depend on the environment in which the service is running. For example, the development and production environments use different Apache Kafka brokers and different AWS credentials. It doesn’t make sense to hard-wire a particular environment’s configuration property values into the deployable service, because that would require it to be rebuilt for each environment. Instead, a service should be built once by the deployment pipeline and deployed into multiple environments.

Nor does it make sense to hard-wire different sets of configuration properties into the source code and use, for example, the Spring Framework’s profile mechanism to select the appropriate set at runtime. That would introduce a security vulnerability and limit where the service can be deployed. Additionally, sensitive data such as credentials should be stored securely using a secrets-management mechanism, such as Hashicorp Vault (www.vaultproject.io) or AWS Parameter Store.

Instead, you should supply the appropriate configuration properties to the service at runtime by using the Externalized configuration pattern.

Pattern: Externalized configuration

An externalized configuration mechanism provides the configuration property values to a service instance at runtime. There are two main approaches:

Push model – The deployment infrastructure passes the configuration properties to the service instance using, for example, operating system environment variables or a configuration file.
Pull model – The service instance reads its configuration properties from a configuration server.

We’ll look at each approach, starting with the push model.
Using push-based externalized configuration

The push model relies on collaboration between the deployment environment and the service. The deployment environment supplies the configuration properties when it creates a service instance. As figure 11.7 shows, it might pass the configuration properties as environment variables, or it might supply them in a configuration file. The service instance then reads the configuration properties when it starts up.
The deployment environment and the service must agree on how the configuration properties are supplied. The exact mechanism depends on the specific deployment environment. For example, chapter 12 describes how to specify the environment variables of a Docker container.
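The service-side half of the push model can be as simple as reading the variables at startup and failing fast if a required one is missing. A minimal sketch (the variable names are illustrative):

public class OrderHistoryServiceMain {

    public static void main(String[] args) {
        // Read configuration pushed by the deployment environment as env variables.
        String kafkaBootstrapServers = requireEnv("KAFKA_BOOTSTRAP_SERVERS");
        String awsRegion = requireEnv("AWS_REGION");

        System.out.printf("Connecting to Kafka at %s in region %s%n",
                kafkaBootstrapServers, awsRegion);
        // ... create the Kafka consumer and DynamoDB client here ...
    }

    // Fail fast at startup if a required property wasn't supplied.
    private static String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}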

Say you’ve decided to supply externalized configuration property values through environment variables. Your application could call System.getenv() directly, much as in the sketch above. But if you’re a Java developer, you’re more likely to use a framework that provides a more convenient mechanism. The FTGO services are built with Spring Boot, which has an extremely flexible externalized configuration mechanism that retrieves configuration properties from a variety of sources with well-defined precedence rules (https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html). Let’s look at how it works.

Spring Boot reads properties from a variety of sources. I’ve found the following sources useful in a microservices architecture:

1. Command-line arguments
2. SPRING_APPLICATION_JSON, an operating system environment variable or JVM system property that contains JSON
3. JVM system properties
4. Operating system environment variables
5. A configuration file in the current directory

A particular property value from a source earlier in this list overrides the same property from a source later in the list. For example, an operating system environment variable overrides a property read from a configuration file.

Spring Boot makes these properties available to code through the Spring Framework’s ApplicationContext. A service can, for example, obtain the value of a property using the @Value annotation:

public class OrderHistoryDynamoDBConfiguration {

  @Value("${aws.region}")
  private String awsRegion;

  ...
}

The Spring Framework initializes the awsRegion field with the value of the aws.region property. That property comes from one of the sources listed earlier, such as a configuration file or the AWS_REGION environment variable.

Push-based configuration is an effective and widely used approach to configuring a service. One limitation, though, is that reconfiguring a running service can be difficult, if not impossible: the deployment infrastructure may not let you change the externalized configuration of a running process without restarting it. You can’t, for example, change the environment variables of a running process. Another limitation is that the configuration property values risk being scattered throughout the definitions of numerous services. As a result, you may want to consider using a pull-based approach, in which a service instance reads its configuration properties from a configuration server. Let’s look at how that works.

The Spring Cloud Config project is an excellent example of a configuration-server-based framework. It consists of a server and a client. The server supports a variety of backends for storing configuration properties, including version control systems, databases, and Hashicorp Vault. The client retrieves configuration properties from the server and injects them into the Spring ApplicationContext.
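Framework specifics aside, the essence of the pull model fits in a few lines: at startup, the service fetches its properties from the configuration server over HTTP. A minimal sketch, assuming a hypothetical config server that serves Java .properties files (the URL and property name are illustrative):

import java.io.StringReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Properties;

public class PullConfigExample {

    public static void main(String[] args) throws Exception {
        // Fetch this service's configuration from the config server (illustrative URL).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://config-server:8888/order-history-service.properties"))
                .build();
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // Parse the response in Java .properties format and use the values.
        Properties config = new Properties();
        config.load(new StringReader(body));
        System.out.println("aws.region = " + config.getProperty("aws.region"));
    }
}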

Using a configuration server has several benefits:

Centralized configuration – All the configuration properties are stored in one place, which makes them easier to manage. What’s more, in order to eliminate duplicate configuration properties, some implementations let you define global defaults, which can be overridden on a per-service basis.

Transparent decryption of sensitive data – Encrypting sensitive data such as database credentials is a best practice. One challenge with encryption, though, is that the service usually needs to decrypt the data, which means it needs the encryption keys. Some configuration server implementations automatically decrypt properties before returning them to the service.
Dynamic reconfiguration – A service could, for example, detect updated property values by polling and reconfigure itself.

The main drawback of using a configuration server is that, unless it’s provided by the infrastructure, it’s yet another piece of infrastructure that needs to be set up and maintained. Fortunately, there are various open source frameworks, such as Spring Cloud Config, which make it easier to run a configuration server.

Now that we’ve looked at how to design configurable services, let’s talk about how to design observable services.

Designing observable services

Let’s say you’ve deployed the FTGO application into production. You probably want to know what the application is doing: requests per second (RPS), resource utilization, and so on. You also need to be alerted to problems, such as a failed service instance or a disk filling up, before they impact a user. And if there’s a problem, you need to be able to troubleshoot it and identify the root cause.

Many aspects of managing an application in production are outside the scope of the developer’s role, such as monitoring the availability and utilization of the hardware. These are clearly the responsibility of operations. But there are several patterns that you, as a service developer, should implement to make your services easier to manage and troubleshoot. These patterns, shown in figure 11.9, expose a service’s behavior and health. They allow a monitoring system to track and visualize a service’s state and generate alerts when there’s a problem. They also make troubleshooting easier.

Health check API – Expose an endpoint that returns the health of the service.

Log aggregation – Log service activity and write the logs to a centralized logging server, which provides searching and alerting.

Distributed tracing – Assign each external request a unique ID and trace requests as they flow between services.

Exception tracking – Report exceptions to an exception tracking service, which de-duplicates exceptions, alerts developers, and tracks the resolution of each exception.

Application metrics – Maintain metrics, such as counters and gauges, and expose them to a metrics server.

Audit logging – Log user actions.

A distinctive feature of these patterns is that each has both a developer component and an operations component. Consider the Health check API pattern: the developer is responsible for implementing a health check endpoint in their service, and operations runs the monitoring system that periodically invokes it. Similarly, for the Log aggregation pattern, the developer is responsible for ensuring their service logs useful information, whereas operations is responsible for log aggregation.

Let’s take a look at each of these patterns, starting with the Health check API pattern.

Using the Health check API pattern

Sometimes a service may be running but unable to handle requests. For instance, a just-started service instance may not yet be ready to accept requests. The FTGO Consumer Service, for example, takes around 10 seconds to initialize its messaging and database adapters. It would be pointless for the deployment infrastructure to route HTTP requests to a service instance until it’s ready to process them.
Also, a service instance can fail without terminating. For example, a bug might cause an instance of Consumer Service to run out of database connections and be unable to access the database. The deployment infrastructure shouldn’t route requests to a service instance that has failed yet is still running. And if the service instance doesn’t recover, the deployment infrastructure must terminate it and create a new instance.

A service instance needs to be able to tell the deployment infrastructure whether or not it can handle requests. A good solution is for a service to implement a health check endpoint, shown in figure 11.10. The Spring Boot Actuator Java library, for example, implements a GET /actuator/health endpoint, which returns 200 if the service is healthy and 503 otherwise. Similarly, the HealthChecks .NET library implements a GET /hc endpoint (https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health). The deployment infrastructure periodically invokes this endpoint to determine the health of the service instance and takes the appropriate action if the instance is unhealthy.
A health check request handler typically tests the service instance’s connections to the infrastructure services it uses. It might, for example, execute a test query against a database. If all the tests succeed, the handler returns a healthy response, such as an HTTP 200 status code. If any of them fail, it returns an unhealthy response, such as an HTTP 500 status code.

A health check request handler might simply return an empty HTTP response with the appropriate status code. Or it might return a detailed description of the health of each of the adapters, which is useful for troubleshooting. But because such details might be sensitive, some frameworks, such as Spring Boot Actuator, let you configure the level of detail included in the health endpoint’s response.

There are two issues to consider when using health checks. The first is the implementation of the endpoint, which must report back on the health of the service instance. The second is how to configure the deployment infrastructure to invoke the health check endpoint. Let’s first look at how to implement the endpoint.

IMPLEMENTING THE HEALTH CHECK ENDPOINT

The code that implements the health check endpoint must somehow determine the health of the service instance. One simple approach is to verify that the service instance can access its external infrastructure services. How to do that depends on the infrastructure service. The health check code can, for example, verify that the service is connected to an RDBMS by obtaining a database connection and executing a test query. A more elaborate approach is to execute a synthetic transaction that simulates a client’s invocation of the service’s API. This kind of health check is more thorough, but it’s likely to be more time consuming to implement and takes longer to execute.

A great example of a health check library is Spring Boot Actuator. As mentioned earlier, it implements the /actuator/health endpoint. The code that implements this endpoint returns the result of executing a set of health checks. Using convention over configuration, Spring Boot Actuator implements a sensible set of health checks based on the infrastructure services used by the application. If, for example, the service uses a JDBC DataSource, Spring Boot Actuator configures a health check that executes a test query. Similarly, if the application uses the RabbitMQ message broker, it automatically configures a health check that verifies the RabbitMQ server is running.

You can also customize this behavior by implementing additional health checks for your service. You implement a custom health check by defining a class that implements the HealthIndicator interface. This interface defines a health() method, which is invoked by the implementation of the /actuator/health endpoint. It returns the outcome of the health check.
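For illustration, here's a minimal sketch of a custom health check, assuming Spring Boot Actuator's HealthIndicator interface; the DatabaseHealthIndicator class name and the test query are hypothetical:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.Statement;

// A sketch of a custom health check that verifies database connectivity
// by executing a trivial test query. Spring injects the DataSource.
@Component
public class DatabaseHealthIndicator implements HealthIndicator {

  private final DataSource dataSource;

  public DatabaseHealthIndicator(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  @Override
  public Health health() {
    try (Connection connection = dataSource.getConnection();
         Statement statement = connection.createStatement()) {
      statement.execute("SELECT 1"); // test query; exact syntax varies by database
      return Health.up().build();
    } catch (Exception e) {
      // When any indicator is down, /actuator/health returns 503
      return Health.down(e).build();
    }
  }
}

Because the class is a Spring bean that implements HealthIndicator, Spring Boot Actuator automatically includes its result in the response of the /actuator/health endpoint.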

INVOKING THE HEALTH CHECK ENDPOINT

A health check endpoint isn't of much use if nobody invokes it. When you deploy your service, you must configure the deployment infrastructure to invoke the endpoint. How you do that depends on the specifics of the deployment infrastructure. For example, as described in chapter 3, you configure some service registries, such as Netflix Eureka, to invoke the health check endpoint in order to determine whether traffic should be routed to the service instance. Chapter 12 discusses how to configure Docker and Kubernetes to invoke the health check endpoint.

Applying the Log aggregation pattern

Logs are a valuable troubleshooting tool. If you want to know what's wrong with your application, a good place to start is the log files. But using logs in a microservice architecture is challenging. Imagine, for example, that you're debugging a problem with a getOrderDetails() request. As described in chapter 8, the FTGO application implements this query using API composition. As a result, the log entries you need are scattered across the logs of the API gateway and several services, including Order Service and Kitchen Service.

The solution is to use log aggregation. As figure 11.11 shows, the log aggregation pipeline sends the logs of all of the service instances to a centralized logging server. Once the logs are stored by the logging server, you can view, search, and analyze them. You can also configure alerts that are triggered when certain messages appear in the logs.

The logging pipeline and server are usually the responsibility of operations. But service developers are responsible for writing services that generate useful logs. Let's first look at how a service generates logs.

HOW A SERVICE GENERATES LOGS

As a service developer, there are a couple of issues you need to consider. The first is which logging library to use. The second is where to write the log entries. Let's first look at the logging library.

Most programming languages have one or more logging libraries that make it easy to generate correctly structured log entries. For example, three popular Java logging libraries are Logback, Log4J, and JUL (java.util.logging). There's also SLF4J, which is a logging facade API for the various logging frameworks. Similarly, Log4JS is a popular logging framework for NodeJS. One reasonable approach is to sprinkle calls to one of these logging libraries throughout your service's code. But if you have stringent logging requirements that can't be met by a logging library, you may need to define your own logging API that wraps an existing logging library.
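As a simple illustration, here's what logging through SLF4J might look like; the class and the messages are hypothetical:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderController {

  // SLF4J is a facade: the output format and destination are determined
  // by the underlying framework, such as Logback or Log4J.
  private static final Logger logger = LoggerFactory.getLogger(OrderController.class);

  public void createOrder(String orderId) {
    logger.info("Creating order {}", orderId);
    try {
      // ... business logic ...
    } catch (RuntimeException e) {
      logger.error("Failed to create order {}", orderId, e);
      throw e;
    }
  }
}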

You also need to decide where to log. Traditionally, you would configure the logging framework to write to a log file in a well-known location on the filesystem. But with the more modern deployment methods, such as containers and serverless, which are discussed in chapter 12, this is often not the best approach. In some environments, such as AWS Lambda, there isn't even a "permanent" filesystem to write the logs to! Instead, your service should log to stdout and let the deployment infrastructure decide what to do with your service's output.

THE LOG AGGREGATION INFRASTRUCTURE

The logging infrastructure is responsible for aggregating the logs, storing them, and enabling the user to search them. One popular logging infrastructure is the ELK stack. ELK consists of three open source products:

Elasticsearch: A text search-oriented NoSQL database, used as the logging server

Logstash: A log pipeline that aggregates the service logs and writes them to Elasticsearch

Kibana: A data visualization tool for Elasticsearch

Other open source log pipelines include Fluentd and Apache Flume. Examples of logging servers include cloud services, such as AWS CloudWatch Logs, as well as numerous commercial offerings. Log aggregation is a useful debugging tool in a microservice architecture.
Let's now look at distributed tracing, which is another way to understand the behavior of a microservices-based application.

Applying the Distributed tracing pattern

Imagine you're an FTGO developer who is investigating why the getOrderDetails() query has gotten slower. You've ruled out an external network problem, so the increased latency must be caused by either the API gateway or one of the services it invokes. One option is to look at each service's average response time. The trouble with this approach is that it's an average across requests, rather than a timing breakdown of an individual request. What's more, complex scenarios may involve many nested service invocations, and you may not even be familiar with all of the services involved. It can be difficult to troubleshoot these kinds of performance problems in a microservice architecture.


Pattern: Distributed tracing

Assign each external request a unique ID and record how it flows through the system from one service to the next in a centralized server that provides visualization and analysis. See http://microservices.io/patterns/observability/distributed-tracing.html.

A good way to get insight into what your application is doing is to use distributed tracing. Distributed tracing is analogous to a performance profiler in a monolithic application. It records information (such as the start time and end time) about the tree of service calls that are made when handling a request. You can then see how the services interact during the handling of external requests, including a breakdown of where the time is spent.

Figure 11.12 shows an example of how a distributed tracing server displays what happens when the API gateway handles a request. It shows the inbound request to the API gateway and the request the gateway makes to Order Service. For each request, the distributed tracing server shows the operation that was performed and the duration of the request.

A trace represents an external request and consists of one or more spans. A span represents an operation; its key attributes are an operation name, a start timestamp, and an end timestamp. A span can have one or more child spans, which represent nested operations. For example, a top-level span might represent the invocation of the API gateway, as is the case in figure 11.12. Its child spans represent the invocations of services made by the API gateway.
The trace ID is typically included in each log entry, so if you search the logs for 8d8fdc37be104cc6, you'll find all of the log entries for that request.

Figure 11.13 shows how distributed tracing works. There are two parts to distributed tracing: an instrumentation library, which is used by each service, and a distributed tracing server. The instrumentation library manages the spans. It also propagates trace context, such as the current trace ID and the parent span ID, in outbound requests. One popular standard for propagating trace context is the B3 standard. The instrumentation library also reports traces to the distributed tracing server, which stores the traces and provides a UI for visualizing them.
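To make the propagation concrete, here's a sketch of what attaching B3 headers to an outbound HTTP request looks like; the URL and the span IDs are illustrative, and in practice the instrumentation library adds these headers for you:

import java.net.HttpURLConnection;
import java.net.URL;

public class B3PropagationExample {

  public static void main(String[] args) throws Exception {
    URL url = new URL("http://order-service/orders/1"); // hypothetical downstream service
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();

    // The B3 headers carry the trace context to the downstream service.
    connection.setRequestProperty("X-B3-TraceId", "8d8fdc37be104cc6");      // current trace ID
    connection.setRequestProperty("X-B3-SpanId", "f0d8fa4f51ea5f3e");       // this span (illustrative)
    connection.setRequestProperty("X-B3-ParentSpanId", "8d8fdc37be104cc6"); // parent span (illustrative)
    connection.setRequestProperty("X-B3-Sampled", "1");                     // report this trace

    connection.getResponseCode(); // would fail at runtime, because the host is fictional
  }
}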

Let's look at the instrumentation library and the distributed tracing server, starting with the library.

USING AN INSTRUMENTATION LIBRARY

The instrumentation library builds the tree of spans and sends them to the distributed tracing server. The service code could invoke the instrumentation library directly, but that would intertwine the instrumentation logic with the business logic. A cleaner approach is to use interceptors or aspect-oriented programming (AOP).

A great example of an AOP-based framework is Spring Cloud Sleuth. It uses the Spring Framework's AOP mechanism to automatically integrate distributed tracing into the service. As a result, all you have to do is add Spring Cloud Sleuth as a project dependency. The service doesn't need to call a distributed tracing API except in those cases that aren't handled by Spring Cloud Sleuth.

ABOUT THE DISTRIBUTED TRACING SERVER

The instrumentation library sends the spans to a distributed tracing server. The distributed tracing server stitches the spans together to form complete traces and stores them in a database. One popular distributed tracing server is Open Zipkin. Zipkin was originally developed by Twitter. Services can deliver spans to Zipkin via either HTTP or a message broker. Zipkin stores the traces in a storage backend, which is either a SQL or a NoSQL database. It has a UI that displays traces, as shown earlier in figure 11.12. AWS X-Ray is another example of a distributed tracing server.

Applying the Application metrics pattern

A key part of the production environment is monitoring and alerting. As figure 11.14 shows, the monitoring system gathers metrics, which provide critical information about the health of an application. Metrics are gathered from every part of the technology stack. They range from infrastructure-level metrics, such as CPU, memory, and disk utilization, to application-level metrics, such as service request latency and the number of requests executed. Order Service, for example, gathers metrics about the number of placed, approved, and rejected orders. The metrics are collected by a metrics service, which provides visualization and alerting.

Metrics are sampled periodically. A metric sample has the following three properties:

Name: The name of the metric, such as jvm_memory_max_bytes or placed_orders

Value: A numeric value

Timestamp: The time at which the sample was taken

In addition, some monitoring systems support the concept of dimensions, which are arbitrary name-value pairs. For example, jvm_memory_max_bytes might be reported with the dimensions area="heap",id="PS Eden Space" and area="heap",id="PS Old Gen". Dimensions are often used to provide additional information, such as the machine name, the service name, or the service instance ID. A monitoring system typically aggregates (sums or averages) metric samples along one or more dimensions.

Many aspects of monitoring are the responsibility of operations. But a service developer is responsible for two aspects of metrics. First, they must instrument their service so that it collects metrics about its behavior. Second, they must expose those service metrics, along with metrics from the JVM and the application framework, to the metrics server.

Let's first look at how a service collects metrics.

COLLECTING SERVICE-LEVEL METRICS

How much work you need to do to collect metrics depends on the frameworks your application uses and the metrics you want to collect. A Spring Boot-based service can, for example, collect (and expose) basic metrics, such as JVM metrics, simply by including the Micrometer Metrics library as a dependency and adding a few lines of configuration. Spring Boot's autoconfiguration takes care of configuring the metrics library and exposing the metrics. A service only needs to use the Micrometer Metrics API directly to collect application-specific metrics.

Consider how OrderService might collect metrics about the number of orders placed, approved, and rejected. It uses MeterRegistry, the interface provided by Micrometer Metrics for collecting custom metrics. Each method increments an appropriately named counter, as the sketch below shows.
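Here's a minimal sketch of what such a service might look like; it assumes the MeterRegistry is injected via the constructor, and the business logic is elided:

import io.micrometer.core.instrument.MeterRegistry;

public class OrderService {

  private final MeterRegistry meterRegistry;

  public OrderService(MeterRegistry meterRegistry) {
    this.meterRegistry = meterRegistry;
  }

  public void placeOrder() {
    // ... business logic for placing an order ...
    meterRegistry.counter("placed_orders").increment(); // exported by Prometheus as placed_orders_total
  }

  public void approveOrder() {
    // ... business logic ...
    meterRegistry.counter("approved_orders").increment();
  }

  public void rejectOrder() {
    // ... business logic ...
    meterRegistry.counter("rejected_orders").increment();
  }
}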

The counter placedOrders is incremented when an order is successfully placed.

The counter approvedOrders is incremented when an order is approved.

The counter rejectedOrders is incremented when an order is rejected.

DELIVERING METRICS TO THE METRICS SERVICE


A service delivers metrics to the metrics service in one of two ways: push or pull. With the push model, a service instance sends the metrics to the metrics service by invoking an API. AWS CloudWatch metrics, for example, implements the push model.
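As an illustration of the push model, a service might deliver a metric sample by POSTing it to the metrics service's API. This sketch is hypothetical: the endpoint and the JSON payload aren't any particular product's API, and a real implementation would batch samples and handle retries.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsPusher {

  private static final HttpClient client = HttpClient.newHttpClient();

  // Pushes a single metric sample (name, value, timestamp) to a
  // hypothetical metrics service endpoint.
  public static void push(String name, double value) throws Exception {
    String json = String.format(
        "{\"name\":\"%s\",\"value\":%f,\"timestamp\":%d}",
        name, value, System.currentTimeMillis());

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://metrics-service/api/samples")) // hypothetical endpoint
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(json))
        .build();

    client.send(request, HttpResponse.BodyHandlers.discarding());
  }
}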

With the pull model, the metrics service (or its agent, running locally) invokes a service API to retrieve the metrics from the service instance. Prometheus, a popular open source monitoring and alerting system, uses the pull model.

The FTGO application's Order Service uses the micrometer-registry-prometheus library to integrate with Prometheus. Because this library is on the classpath, Spring Boot exposes a GET /actuator/prometheus endpoint, which returns metrics in the format that Prometheus expects. The custom metrics from OrderService are reported as follows:

$ curl -v http://localhost:8080/actuator/prometheus | grep _orders

# HELP placed_orders_total
# TYPE placed_orders_total counter
placed_orders_total 1.0
# HELP approved_orders_total
# TYPE approved_orders_total counter
approved_orders_total 1.0

The placed_orders counter, for example, is reported as a metric of type counter. The Prometheus server periodically polls this endpoint to retrieve the metrics. Once the metrics are in Prometheus, you can view them using Grafana, a data visualization tool (https://grafana.com). You can also set up alerts for these metrics, such as when the rate of change of placed_orders_total drops below some threshold.
Application metrics provide valuable insights into your application's behavior. Alerts triggered by metrics enable you to respond quickly to a production issue, perhaps before it affects users. Let's now look at how to observe and respond to another source of alerts: exceptions.

Applying the Exception tracking pattern

A service should rarely log an exception, and when it does, it's important that you identify the root cause. The exception might be a symptom of a failure or a bug in the program. The traditional way to view exceptions is to search the logs. You could even configure the logging server to alert you when an exception appears in the log. There are, however, several problems with this approach:

Log files are oriented around single-line log entries, whereas exceptions consist of multiple lines.

There's no mechanism to track the resolution of exceptions that occur in log files. You would have to manually copy/paste the exception into an issue tracker.

There are likely to be duplicate exceptions, but there's no automated mechanism to treat them as one.


Pattern: Exception tracking

Services report exceptions to a central service, which de-duplicates them, generates alerts, and manages their resolution. See http://microservices.io/patterns/observability/exception-tracking.html.


A better approach is to use an exception tracking service. As figure 11.15 shows, you configure your service to report exceptions to an exception tracking service via, for example, a REST API. The exception tracking service de-duplicates exceptions, generates alerts, and manages the resolution of exceptions.

There are a couple of ways to integrate the exception tracking service into your application. Your service could invoke the exception tracking service's API directly. A better approach is to use a client library provided by the exception tracking service. For example, HoneyBadger's client library provides several easy-to-use integration mechanisms, including a Servlet Filter that catches and reports exceptions.
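Here's a sketch of the filter-based approach. It assumes a hypothetical ExceptionTrackingClient that wraps the tracking service's REST API; it isn't HoneyBadger's actual API.

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import java.io.IOException;

// Catches exceptions thrown further down the filter chain, reports them
// to the exception tracking service, and rethrows them.
public class ExceptionReportingFilter implements Filter {

  // Hypothetical wrapper around the exception tracking service's REST API
  interface ExceptionTrackingClient {
    void report(Throwable t);
  }

  private final ExceptionTrackingClient client;

  public ExceptionReportingFilter(ExceptionTrackingClient client) {
    this.client = client;
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException {
    try {
      chain.doFilter(request, response);
    } catch (RuntimeException e) {
      client.report(e); // send to the exception tracking service
      throw e;
    }
  }
}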

Exception tracking services

There are a number of exception tracking services. Some, such as Honeybadger (www.honeybadger.io), are purely cloud-based. Others, such as Sentry.io (https://sentry.io/welcome/), also have an open source version that you can deploy on your own infrastructure. These services receive exceptions from your application and generate alerts. They provide a console for viewing exceptions and managing their resolution. An exception tracking service typically provides client libraries in a variety of languages.


The Exception tracking pattern is a useful way to quickly identify and respond to production issues.
It's also important to track user behavior. Let's look at how to do that.

Implementing the Audit Logging pattern

The purpose of audit logging is to record each user's actions. An audit log is typically used to help customer support, ensure compliance, and detect suspicious behavior. Each audit log entry records the identity of the user, the action they performed, and the business object(s) involved. An application usually stores the audit log in a database table.


Pattern: Audit logging

Record user actions in a database in order to help customer support, ensure compliance, and detect suspicious behavior. See http://microservices.io/patterns/observability/audit-logging.html.

There are a few ways to implement audit logging:

Add audit logging code to the business logic.

Use aspect-oriented programming (AOP).

Use event sourcing.

Let’s take a look at each of them.

ADD AUDIT LOGGING CODE TO THE BUSINESS LOGIC

The first and most straightforward option is to sprinkle audit logging code throughout your service's business logic. Each service method can, for example, create an audit log entry and save it in the database. The drawback of this approach is that it intertwines auditing code and business logic, which reduces maintainability. The other drawback is that it's potentially error prone, because it relies on the developer remembering to write the audit logging code.
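A sketch of this approach, with a hypothetical AuditLogRepository and AuditLogEntry:

// Audit logging code embedded in the business logic. Note how the
// audit call is interleaved with the business logic, which hurts
// maintainability and is easy for a developer to forget.
public class AccountService {

  private final AuditLogRepository auditLogRepository;

  public AccountService(AuditLogRepository auditLogRepository) {
    this.auditLogRepository = auditLogRepository;
  }

  public void closeAccount(String userId, String accountId) {
    // ... business logic that closes the account ...

    auditLogRepository.save(new AuditLogEntry(userId, "closeAccount", accountId));
  }

  interface AuditLogRepository {
    void save(AuditLogEntry entry);
  }

  record AuditLogEntry(String userId, String action, String businessObjectId) {}
}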

USE ASPECT-ORIENTED PROGRAMMING

Another option is to use AOP. You can use an AOP framework, such as Spring AOP, to define advice that automatically intercepts each service method invocation and persists an audit log entry. This approach is much more reliable, because it automatically records every service method invocation. The main drawback of using AOP is that the advice only has access to the method name and its arguments, so it can be challenging to determine the business object being acted upon and generate a business-oriented audit log entry.
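Here's a sketch of an audit logging aspect, assuming Spring AOP with AspectJ annotations; the pointcut expression and package name are illustrative:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

import java.util.Arrays;

@Aspect
public class AuditLoggingAspect {

  // Runs after any public method of a *Service class in the
  // (hypothetical) com.ftgo package completes successfully.
  @AfterReturning("execution(public * com.ftgo..*Service.*(..))")
  public void audit(JoinPoint joinPoint) {
    String method = joinPoint.getSignature().getName();
    Object[] args = joinPoint.getArgs();
    // Only the method name and arguments are available here, which is
    // why building a business-oriented audit log entry is difficult.
    System.out.println("AUDIT: " + method + " " + Arrays.toString(args));
  }
}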

USE EVENT SOURCING

The third option is to implement your business logic using event sourcing. As mentioned in chapter 6, event sourcing automatically provides an audit log of create and update operations. You need to record the identity of the user in each event. One limitation of using event sourcing, though, is that it doesn't record queries. If your service must create audit log entries for queries, you'll have to use one of the other options as well.

Developing services using the Microservice chassis pattern

This chapter has described numerous concerns that a service must implement, including metrics, reporting exceptions to an exception tracking service, logging and health checks, externalized configuration, and security. Furthermore, as described in chapter 3, a service may also need to handle service discovery and implement circuit breakers. That isn't something you'd want to set up from scratch every time you develop a new service. If you did, it could potentially be days, if not weeks, before you wrote your first line of business logic.

Pattern: Microservice chassis

Build your services on a framework or collection of frameworks that handles cross-cutting concerns, such as exception tracking, health checks, logging, externalized configuration, and distributed tracing. See http://microservices.io/patterns/microservice-chassis.html.


A much faster way to develop services is to build them on top of a microservice chassis. As figure 11.16 shows, a microservice chassis is a framework or set of frameworks that handles these concerns. When using a microservice chassis, you write little or no code to address them.
Using a microservice chassis

A microservice chassis is a framework or set of frameworks that handles numerous concerns, including the following:

Externalized configuration

Health checks

Application metrics

Service discovery

Circuit breakers

Distributed tracing

A microservice chassis significantly reduces the amount of code you need to write. You may not even need to write any code. Instead, you configure the microservice chassis to fit your requirements. A microservice chassis enables you to focus on developing your service's business logic.

The FTGO application uses Spring Boot and Spring Cloud as the microservice chassis. Spring Boot provides functions such as externalized configuration. Spring Cloud provides functions such as circuit breakers. It also implements client-side service discovery, although the FTGO application relies on the infrastructure for service discovery. Spring Boot and Spring Cloud aren't the only microservice chassis frameworks. If, for example, you're writing services in GoLang, you could use either Go Kit (https://github.com/go-kit/kit) or Micro (https://github.com/micro/micro).

One drawback of using a microservice chassis is that you need one for every platform/language combination you use to develop services. Fortunately, it's likely that many of the functions implemented by a microservice chassis will instead be implemented by the infrastructure. For example, as described in chapter 3, many deployment platforms handle service discovery. What's more, many of the network-related functions of a microservice chassis will be handled by what's known as a service mesh, an infrastructure layer that runs outside of the services.

From microservice chassis to service mesh

A microservice chassis is a good way to implement various cross-cutting concerns, such as circuit breakers. But one obstacle to using a microservice chassis is that you need one for each programming language you use. For example, Spring Boot and Spring Cloud are useful if you're a Java/Spring developer, but they aren't any help if you want to write a NodeJS-based service.


Pattern: Service mesh

Route all network traffic in and out of services through a networking layer that implements various concerns, including circuit breakers, distributed tracing, service discovery, load balancing, and rule-based traffic routing. See http://microservices.io/patterns/deployment/service-mesh.html.


An emerging alternative that avoids this problem is to implement some of this functionality in what's known as a service mesh. A service mesh is networking infrastructure that mediates the communication between a service and other services and external applications. As figure 11.17 shows, all network traffic in and out of a service flows through the service mesh. It handles various concerns, including circuit breakers, distributed tracing, load balancing, service discovery, and rule-based traffic routing. A service mesh can also secure interprocess communication by using TLS-based IPC between services. As a result, you don't need to implement these particular concerns in your services.

When you use a service mesh, the microservice chassis is much simpler. It only needs to implement concerns that are tightly integrated with the application code, such as externalized configuration and health checks. The microservice chassis must also support distributed tracing by propagating distributed tracing information, such as the B3 standard headers I mentioned earlier in section 11.3.3.


The current state of service mesh implementations

There are a variety of service mesh implementations, including the following:

Istio (https://istio.io)

Linkerd (https://linkerd.io)

Conduit (https://conduit.io)

At the time of writing, Linkerd is the most mature of the three, with Istio and Conduit still under active development. For more information about this exciting new technology, take a look at each product's documentation.


The service mesh concept is extremely promising. It frees the developer from having to deal with various cross-cutting concerns. Also, the ability of a service mesh to route traffic enables you to separate deployment from release. It gives you the ability to deploy a new version of a service into production but only release it to certain users, such as internal testers. Chapter 12 describes this concept further when discussing how to deploy services using Kubernetes.

It's essential that a service implements its functional requirements, but it must also be secure, configurable, and observable.

Many aspects of security in a microservice architecture are no different than in a monolithic application. But there are some aspects of application security that are necessarily different, including how user identity is passed between the API gateway and the services, and who is responsible for authentication and authorization. A commonly used approach is for the API gateway to authenticate users. The API gateway includes a transparent token, such as a JWT, in each request to a service. The token contains the identity of the principal and their roles. The service uses the information in the token to authorize access to resources. OAuth 2.0 is a good foundation for security in a microservice architecture.

A service typically uses one or more external services, such as message brokers and databases. The network location and credentials of each external service often depend on the environment in which the service is running. You must apply the Externalized configuration pattern and implement a mechanism that provides a service with configuration properties at runtime. One commonly used approach is for the deployment infrastructure to supply those properties via operating system environment variables or a properties file when it creates a service instance. Another option is for the service instance to retrieve its configuration from a configuration server.
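As a minimal sketch of the environment-variable approach, a service might read its configuration at startup like this; the variable names are hypothetical:

// Reads externalized configuration from operating system environment
// variables that the deployment infrastructure sets when it creates
// a service instance.
public class ServiceConfig {

  final String databaseUrl;
  final String messageBrokerHost;

  ServiceConfig() {
    this.databaseUrl = requireEnv("DATABASE_URL");
    this.messageBrokerHost = requireEnv("MESSAGE_BROKER_HOST");
  }

  private static String requireEnv(String name) {
    String value = System.getenv(name);
    if (value == null) {
      throw new IllegalStateException("Missing required environment variable: " + name);
    }
    return value;
  }
}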