STRUCTURE OF THE SYSTEMS

Divyanshi Singh
5 min read · Jun 22, 2021

“Wannabe smart or enjoy the tricks played by smart people?
It’s always better to play magic than to see magic.”
-Divyanshi Singh

Let’s dig deep into the facts about how the magical things are functioning behind the application that we use. We are always surrounded by enormous systems and applications in our day-to-day life. Let’s ponder more over the architecture of the systems and how the complexities are being resolved.

Basic components of the system-
1- Client: We, the users, are the clients. All the mobile apps, desktop apps, and browsers like Chrome, Mozilla Firefox, Safari, etc. come under this one shell, i.e., the client. It is basically the customer of the service.

2- DNS: Whenever we request anything, e.g., when we enter a website URL, the request is first handled by the DNS (Domain Name System). It maintains a mapping of domain names to their respective IP addresses. So, when a request arrives, the DNS looks up the corresponding IP address and returns that address as the response.

3- Load Balancer: The IP address we get from the DNS is basically the IP address of that system's Load Balancer, so the client's next interaction is with it. The Load Balancer can be understood as a gateway between the client and the internal system.

4- Application Servers: Our request is passed here from the Load Balancer, and the functions the request calls for are performed. This is where the required resources are computed and, according to those needs, further requests are sent on to other layers.

5- Storage Layer: This is where all the data is stored; it can be thought of as the database management layer. Queries are answered here and the responses are sent back to the application servers, which respond to the Load Balancer, which finally returns the response to the client.
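The five components above can be strung together in a few lines of code. This is only an in-process sketch: real systems talk over the network, and every name and value here (the stored record, the IP address) is made up for illustration.

```python
# Hypothetical end-to-end flow: client -> DNS -> load balancer -> app server -> storage.
storage = {"user:1": {"name": "Divyanshi", "bookmarks": ["medium.com"]}}

def storage_layer(query):
    return storage.get(query)               # the database answers the query

def app_server(request):
    data = storage_layer(request["key"])    # the app server asks the storage layer
    return {"status": 200, "body": data}

def load_balancer(request):
    return app_server(request)              # forwards the request to an app server

def client(domain, key):
    ip = "203.0.113.10"                     # pretend DNS resolved the domain to this IP
    return load_balancer({"ip": ip, "key": key})

response = client("myapp.io", "user:1")
print(response["body"]["name"])
```

In a real deployment each of these functions would live on a different machine, and every arrow between them would be a network call.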

Efficient implementation of DNS-
A DNS response should be quick, i.e., low latency.
Latency is the time taken to get a response after you have made a request.

Latency ∝ physical distance

The farther a request has to travel, the longer the response takes.
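As a rough sanity check, even at the speed of light the distance alone sets a floor on latency. A back-of-envelope sketch (the distances are invented, and real signals in fibre travel slower than this):

```python
# Lower bound on round-trip latency from distance alone (speed-of-light limit).
SPEED_OF_LIGHT_KM_S = 300_000          # ~3e5 km/s in vacuum; fibre is slower

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time, ignoring processing and queuing delays."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S * 1000

print(round(min_round_trip_ms(100), 2))     # a nearby DNS server: under a millisecond
print(round(min_round_trip_ms(12000), 2))   # halfway around the world: ~80 ms
```

This is why keeping the DNS server physically close to the client matters.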

To decrease latency we spread DNS across locations. A local DNS keeps only the mappings of IP addresses relevant to its particular area; if a request arrives for a domain whose IP address is not available in the local DNS, the local DNS forwards the request to the DNS server above it in the hierarchy.
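One way to picture this: the local DNS is a cache sitting in front of a parent resolver. The sketch below uses made-up records; a real resolver would also honour record TTLs and full recursive lookup.

```python
# Sketch of a local DNS that falls back to the parent (upstream) resolver on a miss.
# All domains and addresses here are illustrative placeholders.
parent_dns = {"example.com": "93.184.216.34", "myapp.io": "203.0.113.10"}
local_dns = {"myapp.io": "203.0.113.10"}     # only locally relevant mappings

def resolve(domain):
    if domain in local_dns:                  # fast path: answered nearby, low latency
        return local_dns[domain]
    ip = parent_dns[domain]                  # miss: ask the DNS above us in the hierarchy
    local_dns[domain] = ip                   # cache it so the next lookup stays local
    return ip

resolve("example.com")                       # first lookup goes upstream
print("example.com" in local_dns)            # now cached locally
```

The first lookup for an unknown domain pays the upstream cost; every lookup after that is served locally.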

ISP (Internet Service Provider) will manage local DNS.

To process data, chunks of it have to move from secondary storage to primary storage. Storage becomes slower to access, and cheaper, the farther it sits from the processor.

For this reason, the DNS key-value mapping of domain names to their respective IP addresses should be kept in primary storage.

Concept of Scaling Systems-

Earlier, we used to confine a system's whole needs to a single machine. Delicious, for example, ran its bookmarking backend this way back in the 2000s: storage was limited to that of one machine, say 120 GB, which was responsible for storing data like user profiles, bookmarks, and search history.
Soon that 120 GB was exhausted as the requests coming to the servers grew, making a single machine incapable of performing its task. Here comes the concept of scaling.

Vertical Scaling-

It is like buying a bigger box with greater storage capacity. If our 120 GB gets exhausted and we replace it with a 1 TB disk, that is Vertical Scaling.

Benefits:
1- It is easier to implement as we just have to replace our old storage with new storage.

2- It consumes less power as there is only one storage box.

3- It is well suited to small businesses and start-ups.

Drawbacks:

1- There are mechanical and physical limits on how big a single machine can be built, because machines are made of physical parts and all of them dissipate energy.

2- It is very costly, since the bigger storage will also be exhausted rapidly and need replacing again.

3- Demand is variable. If demand decreases, the big box sits idle and goes to waste.

Horizontal Scaling-

Horizontal scaling is distributed computing with distributed storage. When we add multiple machines of relatively similar storage capacity in such a way that they function as a single logical unit, that is Horizontal Scaling.
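A minimal way to make several boxes behave as one logical store is to hash each key to a box. The sketch below uses three in-memory dicts as stand-in "machines"; real systems typically use consistent hashing so boxes can be added without reshuffling most keys.

```python
import hashlib

# Three "boxes" (storage nodes) of similar capacity acting as one logical store.
boxes = [dict() for _ in range(3)]

def box_for(key: str) -> dict:
    """Hash the key to decide which box owns it (same box every time)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return boxes[int(digest, 16) % len(boxes)]

def put(key, value):
    box_for(key)[key] = value                # each write lands on exactly one box

def get(key):
    return box_for(key).get(key)             # reads go back to that same box

put("user:42", "a list of bookmarks")
print(get("user:42"))
```

Callers use `put` and `get` without knowing which machine holds the data, which is exactly the "single logical unit" illusion described above.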

Benefits-

1- There is no wastage of resources and it is cheaper.

2- We can adjust capacity: if extra storage is not needed, we can simply switch off the extra boxes.

3- It handles extra load efficiently, making it suitable for big and complex businesses.

Drawbacks-

1- It is difficult to maintain and to code, as a lot of communication has to be established between all the boxes.

2- It involves a lot of network calls. Multiple boxes need to interact with each other, and that can only happen over a network.

3- Network calls increase latency.

4- There are chances of failure when network calls fail, but developers have learned to design around such failures, guided by the trade-offs described in the CAP theorem.
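In practice, surviving a failed network call usually means retrying with backoff rather than crashing. A sketch with a simulated flaky call (the failure count and delays below are arbitrary):

```python
import time

calls = []

def flaky_call():
    """Simulated network call: fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("network call failed")
    return "response"

def call_with_retries(fn, attempts=5, backoff=0.001):
    """Retry a call, waiting exponentially longer after each failure."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            time.sleep(backoff * (2 ** i))   # exponential backoff before retrying
    raise ConnectionError("all retries exhausted")

result = call_with_retries(flaky_call)
print(result)  # succeeds on the third attempt
```

Retries trade extra latency for availability, which is the same tension the CAP theorem formalizes.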

We would never want the client to know whether our system runs on distributed computing, i.e., whether it is horizontally or vertically scaled. So here comes the Load Balancer, which acts as a gateway between the client and our internal servers.

What happens when Load Balancer Crashes?
Even if all the boxes are running well, every network call is getting its response back, and all communication between them is working fine, the client will still not get its response if the Load Balancer crashes.
Load Balancers are basically responsible for routing, so we keep their action very fast by minimizing their responsibilities.
And in case a Load Balancer does go down, we keep another Load Balancer on standby, ready to take its place.
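The routing-and-failover idea can be sketched as a round-robin over servers that skips unhealthy ones. The server names and health flags below are invented; a standby load balancer would simply be a second copy of this logic, ready to take over.

```python
import itertools

# Round-robin routing across app servers, skipping any that are down.
servers = {"app-1": True, "app-2": True, "app-3": False}   # False = crashed
rotation = itertools.cycle(servers)

def route(request):
    for _ in range(len(servers)):
        name = next(rotation)
        if servers[name]:                    # health check: skip downed servers
            return f"{name} handled {request}"
    raise RuntimeError("no healthy servers")

print(route("GET /home"))   # app-1
print(route("GET /feed"))   # app-2
print(route("GET /user"))   # skips the crashed app-3 and wraps around
```

Note how little the balancer does per request: one hash-table check and a string, which is what keeps its latency low.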

Types of Load Balancer-
Hardware Load Balancer:
These load balancers are not very intelligent. They are responsible only for forwarding the incoming requests to the app servers; they perform little smart routing, have no sizeable RAM of their own, and are only minimally customizable.
Software Load Balancer: These load balancers are smart and have a good amount of local RAM. They do not just forward the requests coming to them; they also perform routing, where routing means deciding which available server a request should be sent to. They also have good computing power, which helps keep latency low.

This was just the beginning of system design concepts. Stay with me for upcoming blogs.
