
What is an IP Address? – Explained 


What is an IP ADDRESS?  

An IP address, short for Internet Protocol address, is the address of your device on the internet.

Suppose you want to call and invite your friends for a weekend party but don’t have their phone numbers. You decide to visit and invite them personally, but remember that you do not have their addresses either. The weekend is RUINED.  

So, we need a phone number to make a call and an address to visit their place.  

Similarly, all the devices with internet connectivity in a network need an address to communicate with each other. The address should be unique to reach the correct network devices.  

The server of Amazon needs a different IP Address than that of Flipkart so that if you want to reach Amazon, you don’t get redirected to Flipkart.  

Also, your computer needs a unique IP Address so that when Amazon responds to your request, the response does not reach any other device.

So, an IP Address is a logical address assigned to every device in a network. It allows a host on one network to interact with the host on a different network.

What are bits and bytes?

Bit: A bit is the smallest unit of storage. It is either 1 or 0.

Byte: A collection of 8 bits is known as a byte.

What are the types of IP Addresses?

IP Addresses are of two types.  

IPv4:

IPv4 is written as four decimal numbers separated by dots; each number is known as an octet (one byte). An IPv4 address can be depicted in the following three ways:

  • Dotted Decimal – 172.16.30.55  
  • Binary – 10101100.00010000.00011110.00110111  
  • Hexadecimal – AC101E37  

Look at the binary notation of the IPv4 address. IPv4 is made up of 32 binary bits, which are divided into four parts known as octets (8 bits each). Each octet ranges from 0 to 255 in decimal, or 00000000 to 11111111 in binary. IPv4 offers around 4.3 billion addresses (2^32 = 4,294,967,296), which signifies that around 4.3 billion network devices can be addressed simultaneously.
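If you want to try this yourself, here is a minimal Python sketch (not part of the original article) that converts the dotted-decimal address used above into its binary and hexadecimal notations:

```python
# A small illustrative sketch: show the three notations of 172.16.30.55.
address = "172.16.30.55"
octets = [int(part) for part in address.split(".")]

binary = ".".join(f"{octet:08b}" for octet in octets)       # dotted binary
hexadecimal = "".join(f"{octet:02X}" for octet in octets)   # packed hexadecimal

print(binary)        # 10101100.00010000.00011110.00110111
print(hexadecimal)   # AC101E37
print(2 ** 32)       # 4294967296 possible IPv4 addresses
```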

Classes of IPv4 

IPv4 is divided into five classes-  

Class A  

  • Class A ranges from 0-127  
  • Within this range, 0.0.0.0 is reserved for default routing (0.0.0.0/0) and 127.0.0.0 to 127.255.255.255 is reserved for loopback (LAN card) testing  

Class B  

  • Class B ranges from 128-191  
  • APIPA (Automatic Private IP Addressing) in DHCP uses this class range (169.254.0.1 to 169.254.255.254)  

Class C   

  • Class C ranges from 192-223  
  • There is no reserved IP in this Class  

Class D  

  • Class D ranges from 224-239.  
  • This class is reserved and not assigned to devices.  
  • This class is used for Multicasting purposes.  

Class E  

  • Class E ranges from 240-255  
  • This class is reserved for research and development purposes. 

  • Class A – Range: 0-127; Subnet Mask: 255.0.0.0; Default CIDR: /8 
  • Class B – Range: 128-191; Subnet Mask: 255.255.0.0; Default CIDR: /16 
  • Class C – Range: 192-223; Subnet Mask: 255.255.255.0; Default CIDR: /24 
  • Class D – Range: 224-239; Subnet Mask: N/A 
  • Class E – Range: 240-255; Subnet Mask: N/A 

The first octet of a Class A address represents the Network ID. 

The first 2 octets of a Class B address represent the Network ID. 

The first 3 octets of a Class C address represent the Network ID. 
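As a quick illustration of the class ranges listed above, here is a small Python sketch (an illustrative example, not from the original article) that determines the class of an address from its first octet:

```python
# Determine the classful class of an IPv4 address from its first octet,
# following the ranges in the table above.
def ip_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"
    if first_octet <= 191:
        return "B"
    if first_octet <= 223:
        return "C"
    if first_octet <= 239:
        return "D"
    return "E"

print(ip_class("10.1.2.3"))       # A
print(ip_class("172.16.30.55"))   # B
print(ip_class("192.168.1.1"))    # C
```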


IPv6:

Although IPv4 has around 4.3 billion addresses, which seems like a lot, the number of network devices has already crossed that mark, so IPv4 addresses have become scarce. IPv6 comes to the rescue. IPv6 is a 128-bit address represented by 8 groups of hexadecimal digits separated by colons (:). 

For example – 2620:cc:8000:1c82:544c:cc2e:f2fa:5a9c.  

IPv6 has a total of 2^128 addresses, which is more than enough for now.


What is the difference between IPv4 and IPv6?

  • IPv4 is a 32-bit protocol, while IPv6 is a 128-bit protocol.  
  • In IPv4, the octets are separated by a dot (.), whereas in IPv6, the groups are separated by a colon (:). 
  • IPv6 addresses are alphanumeric, while IPv4 addresses are numeric.  
  • A checksum field is present in the IPv4 header but not in the IPv6 header.
  • Variable Length Subnet Masking (VLSM) is supported by IPv4 but not by IPv6.  

Note: Read this blog to learn the difference between IPv4 and IPv6 in detail.

What is the classification of IP Address?

  • Private IP Address 

Each device connected to a Local Area Network is assigned a Private IP Address. It is a local address and cannot be used to reach the internet directly.  

It is assigned by your local gateway router and is visible only in the internal network. Devices on the same network are assigned unique Private IP Addresses so that they can communicate with each other. Two different LAN networks can have the same Private IP address as these IP Addresses are not routable to the internet, hence no issue will arise.  

For example – Your computer and your friend’s computer sitting at different locations can have the same Private IP Address. 

  • Public IP Address 

Public IP Addresses are assigned by your Internet Service Provider (ISP) to your router. These addresses are routable on the internet, and hence they help devices with private addresses reach the internet (using a method known as NAT). 

  • Class A – Private: 10.0.0.0 – 10.255.255.255; Public: 1.0.0.0 – 9.255.255.255 and 11.0.0.0 – 126.255.255.255 
  • Class B – Private: 172.16.0.0 – 172.31.255.255; Public: 128.0.0.0 – 172.15.255.255 and 172.32.0.0 – 191.255.255.255 
  • Class C – Private: 192.168.0.0 – 192.168.255.255; Public: 192.0.0.0 – 192.167.255.255 and 192.169.0.0 – 223.255.255.255 
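To check whether an address falls inside one of the private ranges above, you can use Python's standard ipaddress module; this is a minimal sketch, not part of the original article:

```python
# Classify a few addresses as private (RFC 1918) or public.
import ipaddress

for addr in ["10.5.5.5", "172.20.0.1", "192.168.1.10", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "->", "Private" if ip.is_private else "Public")
```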

 

  • Dynamic IP Address: 

As the name suggests, a Dynamic IP Address keeps on changing. Every time you connect your device, a new dynamic IP Address is assigned. It is allocated by your Internet Service Provider. ISPs buy a large pool of IP Addresses and assign them to their customers.  

  • Static IP Address 

In contrast to Dynamic IP Addresses, Static IP Addresses do not change. Most internet users do not require a Static IP Address. They are used by services like Google, YouTube, and Facebook, whose addresses are registered on DNS servers.  

  • For example – The IP Address of Google's public DNS server is 8.8.8.8, which remains intact and does not change.  

8.8.8.8 is registered to Google's public DNS service on the DNS servers. If Google's services used Dynamic IP Addresses instead of Static ones, the addresses would keep changing, and DNS records and users would have to chase the latest IP Address of Google.com every time. Hence, for such cases, Static IP Addresses are used.
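As a quick illustration (the exact addresses returned can vary by location and over time), here is a minimal Python sketch that asks DNS for the address currently registered for a name:

```python
# Resolve hostnames via DNS using the standard library.
import socket

print(socket.gethostbyname("dns.google"))   # Google's public DNS, typically 8.8.8.8 or 8.8.4.4
print(socket.gethostbyname("google.com"))   # one of Google's web-server addresses
```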


  • Website IP Address: 

These IP Addresses are used by small businesses/blog website owners who don't have their own servers and rely on hosting companies to host their websites on their behalf.  

There are two types of Website Addresses-  

  • Shared: WordPress is an example of a shared hosting IP Address. In this case, more than one website uses the same IP Address. Most small companies use Shared IP Addresses to reduce their cost. It can be used if traffic on your website is not huge.  
  • Dedicated: It is the IP Address that is used by a single company. A dedicated Website IP Address is used when traffic on a particular website is usually high. It is much more secure than Shared Website IP Address. 

Transmission Control Protocol: Master the concepts of TCP 


In the world of networking, have you ever wondered how our emails arrive without any missing text or attachments, our web pages load without errors or missing details, and files are downloaded intact? What happens behind the scenes is always interesting, be it in a movie or in technology.  

Protocols work on different layers to achieve these goals. TCP, also known as Transmission Control Protocol, is one such protocol. It might look complex, but don't worry! In this article, you will get in-depth information about the Transmission Control Protocol.   

What is TCP?

TCP stands for Transmission Control Protocol. It works on the Transport layer, i.e., Layer 4 of the OSI model, and ensures the transmission of packets from source to destination.  

Transmission Control Protocol is the most important protocol of the Internet Protocol Suite. It works by dividing data into small units called segments and making sure that they are delivered successfully from the source to the destination device, and then reassembling them at the destination.   

TCP's ability to establish a connection before data transmission begins is one of its advantages, and that is the reason it is called a connection-oriented protocol. To understand how the Transmission Control Protocol is reliable, let's take an example.  

Suppose Ankit is looking for CCNA training to start his career in the field of networking. While surfing, he gets to know about NETWORK KINGS. 

He sends an HTTPS request from his browser to NK's servers to access the website and browse the vast catalogue of networking courses available. But what happens if Ankit's request to access NK's webpage is lost, or the server's response is lost in transit? The webpage fails to load in Ankit's web browser.   

Most application layer protocols (HTTPS in our case) need a way to ensure the delivery of data across the network. This is where the Transmission Control Protocol (TCP) plays an important role.  

How does Transmission Control Protocol ensure data/segment delivery?

Now the question arises: how does TCP ensure error-free data delivery?  

It is achieved using a process called a 3-way handshake.  

What is a 3-way handshake?

TCP provides a reliable connection between 2 network devices by using a process called TCP 3-way Handshake. 

STEP 1: SYN  

  • Initially, the client sends a synchronization (SYN) message to the server when it wants to connect to it.  
  • The SYN flag is set to 1. The message also contains a sequence number (a 32-bit random number).  
  • The maximum segment size and window size are also set.  
  • If the window size is set to 10000 bytes and the maximum segment size is set to 100 bytes, then a maximum of 100 data segments can be transmitted at a time (10000/100 = 100).  

STEP 2: SYN-ACK  

  • On receiving the SYN packet from the client, the server acknowledges the request by sending a SYN-ACK message. It includes an acknowledgement number that confirms receipt of the client's SYN packet, along with the server's own SYN to initiate synchronization with the client.  
  • By setting the Acknowledgment (ACK) flag to 1, the server acknowledges the client's request.  
  • The ACK is the response to the segment sent by the client, and the SYN indicates the sequence number with which the server starts its own segments.  
  • The server sets the SYN flag to 1, which confirms that the server also wants to establish a connection with the client.  
  • The sequence number used by the server is different from that of the client.  
  • If the client's SYN sequence number is X, then the server's acknowledgement number will be X+1.  

STEP 3: ACK  

  • The client sends back an ACK to the server when it receives the SYN from the server, which confirms the delivery of the server's SYN-ACK packet.   
  • If the server's SYN sequence number is Y, then the client's acknowledgement number will be Y+1.  
  • After the server receives the ACK from the client, the connection is established and data transmission can start between the client and server.  

That is why TCP is known as Connection-Oriented Protocol.  

The 3-way handshake plays an important role in a reliable connection. It ensures that the client and server are ready to send and receive data by predetermining sequence numbers and window size and confirming each other's availability.  
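In everyday programming, the operating system performs this handshake for you. The following minimal Python sketch (the host and port are placeholders, not from the original article) shows that by the time connect() returns, the SYN, SYN-ACK, and ACK have already been exchanged:

```python
import socket

# The OS carries out the TCP 3-way handshake inside create_connection().
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # The connection is already established here, so data can be sent reliably.
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(200).decode(errors="replace"))
```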

NOTE: To understand the process of TCP error-free data delivery, let’s continue with our previous example.   

If Ankit's request to access NK's webpage is lost, or the server's response is lost in transit, the webpage fails to load in Ankit's web browser. To ensure that data/segments do not get lost, TCP's 3-way handshake plays a crucial part.   

NK's server sends out the data to the client (Ankit's PC) as 3 segments (1, 2, 3).  

We have understood that for every SYN there is an ACK: the client's acknowledgement number is the server's sequence number + 1, and the server's acknowledgement number is the client's sequence number + 1.   

Let us suppose that segment 2 is lost. The client will not send any acknowledgement to the server for segment 2, and it realizes that segment number 2 is missing. The server will then send segment number 2 to the client again, and if for some reason it does not, the client can request segment 2 again. At the client's end, the segments are reassembled into the complete data with the help of sequence numbers.   
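The idea can be pictured with a toy Python sketch (purely illustrative; real TCP does this inside the operating system): the receiver spots the missing sequence number, asks for a retransmission, and reassembles the segments in order.

```python
# Receiver-side view: segment 2 was lost in transit.
received = {1: b"networking ", 3: b"at Network Kings"}
expected = [1, 2, 3]

missing = [seq for seq in expected if seq not in received]
print("Retransmit segments:", missing)                      # -> [2]

received[2] = b"courses "                                    # retransmission arrives
data = b"".join(received[seq] for seq in sorted(received))
print(data.decode())                                         # networking courses at Network Kings
```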

In this way, Ankit can easily search for the networking courses available on NK’s website hassle-free and start his career as a skilled network engineer.   

What are the advantages of TCP?

From the above example, we can observe the following advantages of TCP: –  

  • Reliability – Since it is a connection-oriented protocol, it is more reliable for sending e-mails, web browsing, file sharing, remote access (used by SSH), etc. Data is not corrupted when TCP is used as the Transport layer protocol.  
  • Flow control – TCP uses a sliding window mechanism for flow control. It can adjust the rate of data transmission according to the receiver's ability to process the data, which also reduces the risk of segment loss due to congestion. 
  • Error detection – It provides error detection by using checksums.

What are the disadvantages of TCP?

The disadvantages of TCP are as follows-  

  • Latency – TCP is comparatively slower than UDP; its reliability comes at the cost of latency. Acknowledgement packets and retransmission of lost packets can cause delays.   
  • Overhead – It has a relatively higher overhead than UDP. Due to the error-checking mechanism, additional control information leads to higher resource consumption and overhead.   

NOTE: Due to this higher latency and overhead, TCP is not suitable for applications that need low latency, like real-time gaming, VoIP services, and live streaming. 

Conclusion

For reliable delivery of data, TCP can be trusted. It is a connection-oriented protocol that uses a process called the 3-way handshake to ensure the delivery of data between network devices. Despite some disadvantages, TCP has many advantages and is a widely used protocol. 

Understanding the STP Election Process & How it Takes Place!


Redundancy in any network is necessary to provide a backup path if one link goes down, but it may also lead to a loop in a network and hence network congestion.   

Networks are configured with redundant paths. Although redundancy is a crucial aspect of network design, it may also lead to the formation of a loop. A loop can occur when data travelling from source to destination gets stuck in a circle because of the redundant link. To avoid data looping, the Spanning Tree Protocol is used.   

Spanning Tree Protocol (STP) works on Layer 2 of the OSI model and prevents Ethernet loops in the network topology while still allowing redundancy.   

Switches S1 and S2

Let us take an example of the above Network Topology.   

Switches S1 and S2 are connected via link 1.  

S3 is a redundant switch providing redundancy in a network.   

If the link between S1 and S2 goes down for any reason, Data can travel to S2 via S3.     

Suppose S1 sends data to S2 via link 1.   

Data will also travel to S3 via link 2, then to S2 via link 3, and again back to S1. 

Hence a loop is formed where data travels from S1 to S3 to S2 and again to S1.   

Hence, without STP we would have to remove the redundant link to avoid the loop, and there would be no redundancy. Instead, STP blocks some switch ports (chosen through the STP election) to prevent looping. A blocked port can enable itself when there is a change in topology or a link failure, hence preserving redundancy.   

NOTE: To understand how STP Election works, how the port is blocked, which port to block, and dive into the world of STP, we need to understand some basic terminologies and concepts used in Spanning Tree Protocol.   

What is Bridge ID?

Bridge ID is a combination of the Bridge Priority and the MAC Address and is unique for every switch. The Bridge Priority portion is a numerical value that ranges from 0 to 65535. 

MAC Address also called Media Access Control Address is a unique number assigned to the Network Interface Controller (NIC) of a device. It is a sort of Hardware address and is used at the data link layer. It is a 48-bit address.  

What is Root Bridge?

The switch with the lowest Bridge Priority becomes the Root Bridge. If the priority of 2 or more switches is the same, then the switch with the lowest MAC address (and hence the lowest Bridge ID) becomes the Root Bridge.
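Because the Bridge ID is compared as priority first and MAC address second, the election can be sketched in a few lines of Python (an illustrative example with made-up values similar to the topology used later in this article, not from the original):

```python
# Root bridge election: the lowest (priority, MAC address) pair wins.
switches = {
    "S1": (32768, "00:00:00:00:00:01"),
    "S2": (32768, "00:00:00:00:00:02"),
    "S3": (32768, "00:00:00:00:00:03"),
}

# Tuples compare element by element: priority first, then MAC address.
root_bridge = min(switches, key=lambda name: switches[name])
print("Root bridge:", root_bridge)   # S1 – priorities tie, so the lowest MAC wins
```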

What are Port Roles?

The port roles are as follows- 

  • Root Port – The root port is the port on a non-root switch that provides the shortest (least-cost) path toward the root bridge.  

A non-root switch always has exactly 1 root port.  

  • Designated Port – A designated port never blocks traffic (frames). On each link, the port that offers the least cost to the root bridge becomes the Designated Port. The ports on the Root Bridge are always Designated Ports, but not all ports of a non-root bridge can be designated ports.   
  • Forwarding Port – These ports always forward frames. Designated ports are in the forwarding state.  
  • Blocked Port – Blocked ports do not forward frames and help in preventing loops. A blocked port only listens to BPDUs. Any port other than the root port and designated port is a blocked port.

What are STP Timers?

Three STP timers help in loop prevention, namely- 

  • Hello Timer – The Hello Timer determines the interval at which the root bridge sends out STP Bridge Protocol Data Units (BPDUs). BPDUs are crucial for bridges to exchange information, establish the root bridge, and maintain the network's topology. By default, the Hello Timer is set to 2 seconds.  
  • Forward Delay – The Forward Delay is the time a port spends in each of the listening and learning states before it transitions to the forwarding state after a topology change. During this period, a bridge listens for BPDUs to detect changes in the network topology and ensure network stability. By default, the Forward Delay is set to 15 seconds.  
  • Maximum Age – The Maximum Age is the maximum time a bridge waits to receive a BPDU before it considers the topology to have changed. If a bridge does not receive a BPDU within the Max Age interval, it assumes that the root bridge or connectivity has been lost. By default, the Max Age timer is set to 20 seconds.  

What are Port States?

  • Disabled – The port is administratively shut down; it does not forward any frames.  
  • Blocking – In this state, the port does not forward any frames and discards frames it receives, but it still listens to and processes BPDUs.   
  • Listening – After the Blocking state, the port enters the Listening state. In this state, the port still does not forward frames but actively listens to BPDUs. The port uses this time to learn about changes in the network's topology and prepares for the transition to the next state.  
  • Learning – In the Learning state, the port starts to learn MAC addresses by observing the source MAC addresses of received frames. It continues to listen to BPDUs and builds its MAC address table. However, it still does not forward frames during this state.  
  • Forwarding – The port moves to the Forwarding state from the Learning state and starts forwarding frames in the network. The port also continues to process BPDUs, and hence its address table remains updated. 

What is BPDU?

BPDU, also known as Bridge Protocol Data Unit, is an essential component of the Spanning Tree Protocol. A BPDU is a message transmitted by each switch that helps to exchange information about the network topology and hence helps in the STP election.   

There are two types of BPDU, namely- 

Configuration BPDU – This BPDU gets exchanged when switches are connected or enabled. It is the primary BPDU and includes the following important information about the network topology: –    

  • Root Bridge ID: – Bridge ID of the root bridge in a network.   
  • Bridge ID: – Includes Bridge priority and MAC ADDRESS.   
  • Path Cost: – Includes the cost of the path to reach the root bridge.   
  • Port roles: – Includes the roles assigned to each port such as root port, designated port, or blocked port. 

Topology Change Notification BPDU (TCN) – A TCN is transmitted when there is any change in the topology of a network, such as a link failure, the addition of a new switch, link recovery, etc. When a switch detects a change in the network, it generates a TCN and broadcasts it to its neighbouring switches. The other switches then respond according to the changes that occurred in the network.   

For example: If a link goes down, Switches will reconverge the path to the backup link.   

Hence, the exchange of Configuration BPDUs and TCN BPDUs helps switches maintain a loop-free path while responding to changes in the network's topology. The multicast destination MAC address used by BPDUs is 01:80:C2:00:00:00.

Step-By-Step Guide to Understanding the STP Election Process

Let's understand how the Spanning Tree Protocol election works, and how and which port is blocked to prevent looping in the network. 

MAC ADDRESS

Let’s take an example of the above topology. 

Switches S1, S2 and S3 have the MAC addresses 00.00.00.00.00.01, 00.00.00.00.00.02, and 00.00.00.00.00.03 respectively. 

The priority of all 3 switches is 32768. (By default, Cisco switches have the priority set to 32768, but it can also be changed and configured manually.) 

Steps involved in the (Spanning Tree Protocol) STP Election process: –  

1. Bridge Priority Determination: – 

When the switches are turned on, they start sending Configuration BPDUs containing the Bridge ID, the cost to the Root Bridge, and the STP timers (Hello Timer, Max Age Timer, Forward Delay Timer). 

The bridge ID is 8 bytes. 
It is a combination of Bridge Priority and MAC ADDRESS. 

Bridge Priority and MAC ADDRESS

2. Root Bridge and Root Port Election: –  

Initially, every switch considers itself to be the ROOT BRIDGE. When a switch receives a BPDU with a lower Bridge ID (a superior BPDU), it stops sending its own configuration BPDU and starts forwarding the superior BPDU to its neighbours. 

The Bridge ID (Bridge Priority + MAC ADDRESS) comparison starts with the priority, hence the switch with the lower priority value (the lower the priority value, the higher the priority of the switch) becomes the ROOT BRIDGE. 

If the priority of 2 or more switches is the same, the switch with the lower MAC ADDRESS becomes the ROOT BRIDGE. 

In our example, all the switches have the same priority, but the MAC ADDRESS of S1 is the lowest, hence S1 becomes the ROOT BRIDGE. 

Also, all the ports on the ROOT BRIDGE become DESIGNATED PORTS, while each non-root switch selects the port with the least-cost path toward the ROOT BRIDGE as its ROOT PORT. 

ROOT PORTS and DESIGNATED PORTS never go into a blocking state and always forward Ethernet frames. 

These ports do not block traffic. 

ROOT BRIDGE

3. Designated Port Election: –  

Once the root ports are elected, Designated Ports are identified on the non-root bridges. 

Designated Ports are the ports on a segment that offer the lowest cost to reach the Root Bridge. 

Costs are determined by the speed of the link connecting the switches. Some default link costs are given below: –  

  • 10 Mbps – Link Cost 100 
  • 100 Mbps – Link Cost 19 
  • 1 Gbps – Link Cost 4 
  • 10 Gbps – Link Cost 2 
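Using these default costs, the total cost of a path is simply the sum of the costs of the links along it. A small illustrative Python sketch (not from the original article):

```python
# Compare a direct path and an indirect path to the root bridge.
link_cost = {"10 Mbps": 100, "100 Mbps": 19, "1 Gbps": 4, "10 Gbps": 2}

direct_path = ["1 Gbps"]               # e.g. S2 -> S1
indirect_path = ["1 Gbps", "1 Gbps"]   # e.g. S2 -> S3 -> S1

print(sum(link_cost[link] for link in direct_path))     # 4
print(sum(link_cost[link] for link in indirect_path))   # 8 – the higher-cost path is blocked
```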

4. Blocking Port Election: –  

We now know how root ports and designated ports are elected. Let us now talk about how to select a port that will be blocked. 

The port connected via the link with the highest cost to reach the ROOT BRIDGE will be blocked, and it will not transmit any Ethernet frames unless a change in the topology takes place. 

CASE 1: 

All the links connecting the switches have the same cost. 

ROOT PORTS

In the above topology, the switches are connected via 1 Gbps links, which have a cost equal to 4. 

The direct cost for S2 to reach the ROOT BRIDGE (S1) is 4, and the indirect cost (via S3) is 8. 

For S3 as well, the direct cost to reach the ROOT BRIDGE (S1) is 4, and the indirect cost (via S2) is 8. 

ROOT BRIDGE

The direct and indirect costs for both switches are equal, and hence there is a tie. 

In such cases, where there is a tie between the costs, the election again happens based on the Bridge ID. 

ROOT BRIDGE

The priority of S2 and S3 is equal, but the MAC ADDRESS of S2 is lower, i.e., 00.00.00.00.00.02. 

Hence S2 wins the STP election and becomes the Designated Switch on the S2–S3 segment, so its port toward S3 becomes the Designated Port. 

Now the port on S3 will be blocked to avoid the loop. 

To decide which port will be blocked, the cost of both links from S3 to reach S1 (the Root Bridge) is compared. 

The direct cost to reach S1 is 4, which is lower than the indirect cost of 8. 

Hence the port connected via the link with the higher cost will be blocked. 

ROOT BRIDGE

S1 becomes the Root Bridge because the Bridge ID of S1 is the lowest. 

S2 becomes the Designated Switch; although its cost to reach S1 is the same as S3's, its Bridge ID is lower than S3's.  
S3 has one port forwarding (its port toward S1) while its other port (toward S2) is blocked. 

CASE 2: 

The links have different costs. 

ROOT BRIDGE

S1 and S2 are connected with a 100 Mbps link, which has a cost equal to 19. 

S1 and S3 are connected with a 1 Gbps link, which has a cost equal to 4. 

S3 and S2 are connected with a 100 Mbps link, which has a cost equal to 19. 

ROOT BRIDGE

The direct cost of S2 to reach S1 is 19. 

The direct cost of S3 to reach S1 is 4. 

Since the direct cost of S3 is lower, no port on S3 will be blocked, and its port toward S2 becomes the Designated Port for that segment. 

The direct cost for S2 to reach S1 is 19 and the indirect cost (via S3) is 19 + 4 = 23, hence the port on S2 connected to the higher-cost path (the port toward S3) will be blocked. 

ROOT BRIDGE

In this way, by determining Root Bridge, Root Ports, Designated Ports, and Blocking Ports, the Spanning Tree Protocol creates a loop-free network. 

Traffic flows along the designated paths, ensuring redundancy and fault tolerance in the network.

Why is the port with a higher cost blocked?

The higher the speed of the link, the lower the cost, and vice versa. 

If a port with a higher speed is blocked, then the network will become slow and inefficient. 

Also, if a port with a higher speed is blocked, there is no sense in investing in a more expensive higher-speed link.

Conclusion

As network engineers, our goal is to make network communication more efficient and hassle-free. 

Spanning Tree Protocol is one such protocol that helps to make a loop-free path and remove network congestion at the Data link layer (Layer 2). 

The concept behind blocking a port is to elect a ROOT BRIDGE first and then find the path which has the least cost to reach the ROOT BRIDGE. The port connected to a link with the higher total cost to reach the ROOT BRIDGE is blocked. 

The least cost implies the higher speed of the link and hence it is favourable to block the port with a lower speed (i.e., higher cost) to make the network faster. 

(Please note that the cost mentioned here does not signify the monetary cost but it is a parameter used to find the shortest path.)

How Do Switches Forward Ethernet Frames? Explained


A network cannot communicate without frames as they provide a well-structured format for transmitting data across a network. An encapsulation process creates a ‘frame’ when data is prepared to be sent over a network.

Frames play an important role in Layer 2, i.e., the Data Link Layer of the Open Systems Interconnection (OSI) model, which defines how the different network protocols interact.

The data link layer is responsible for the efficient transfer of data across a physical network link, like an Ethernet connection. It creates frames out of the data packets handed down from Layer 3 before transmitting them over the network.

Note: If you have been following up with our new CCNA series, you might have come across the concept of configuration management tools. If you haven’t, I recommend you do so before jumping on to this blog.

In this blog, you will learn about the components of an Ethernet frame and how switches receive and forward these Ethernet frames.

What is meant by Ethernet frames?

The basic building blocks of data transfer in Ethernet networks are Ethernet frames. They are made up of a header and a payload that contain the transferred data. The frame length, source and destination MAC addresses, and error-checking information are all included in the header.

The structure of Ethernet frames is clearly defined. An Ethernet frame consists of a preamble, a start frame delimiter, the destination MAC address, the source MAC address, an EtherType or Length field, the payload, and a frame check sequence. You can learn about them in detail here.

The MAC addresses identify the source and destination devices, while the preamble and start frame delimiter synchronize the receiver with the incoming frame.

How are Ethernet Frames received by the switches?

The following steps are followed for the switch to receive Ethernet frames. These are:

  • Switch operation:

Switches are essential components of contemporary network architecture. When an Ethernet frame arrives, a switch determines the correct port for forwarding by looking at the destination MAC address. A forwarding table, often referred to as a MAC address table, is maintained by the switch and is used to associate MAC addresses with specific switch ports.

  • MAC address learning:

Switches use MAC address learning to populate their forwarding tables. The source MAC address of each frame that a switch receives is extracted and associated with the port on which the frame arrived. In this way, the switch builds a database of MAC addresses and their corresponding ports, enabling efficient forwarding.

  • Filtering and forwarding:

A switch can decide how to forward frames once it has learned a device's MAC address. When a switch receives a frame whose destination MAC address matches an entry in its forwarding table, it passes the frame only to the correct port. By removing superfluous traffic, this method increases network efficiency.

  • Broadcast and multicast handling:

When compared to unicast frames, switches treat broadcast and multicast frames differently. The switch forwards incoming broadcast frames to all associated ports, ensuring that they are received by all network nodes. Similarly, multicast frames are forwarded only to the ports that have joined the multicast group.

How are Ethernet Frames forwarded by the switch?

The following methodologies are followed by a switch to forward the Ethernet frames. These are mentioned below:

  • Unicast forwarding:

In unicast forwarding, when a switch receives an Ethernet frame with a specific MAC address as the target, it consults its forwarding table to identify the proper port for the destination. The switch then passes the frame directly to that port, so that the source and destination devices can communicate directly with one another.

  • Broadcast and multicast forwarding:

When a switch receives a broadcast or multicast frame, it copies the frame and forwards it to all ports except the incoming port. By ensuring that every device on the network (or in the multicast group) receives the frame, broadcast and multicast communication are effectively enabled.
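The learning and forwarding behaviour described above can be summed up in a toy Python sketch (purely illustrative; a real switch does this in hardware):

```python
# Learn source MACs per port; forward known unicasts, flood everything else.
mac_table = {}                      # MAC address -> port
ports = [1, 2, 3, 4]
BROADCAST = "ff:ff:ff:ff:ff:ff"

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port                      # MAC address learning
    if dst_mac != BROADCAST and dst_mac in mac_table:
        return [mac_table[dst_mac]]                   # known unicast: one port
    return [p for p in ports if p != in_port]         # flood: all ports but the incoming one

print(handle_frame("aa:aa:aa:aa:aa:01", BROADCAST, 1))             # [2, 3, 4]
print(handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", 3))   # [1]
```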

What is the role of VLANs in Ethernet Frames?

The role of VLANs in Ethernet Frames is discussed below:

  • Basics of VLAN:

An approach to logically divide a physical network into several virtual networks is through the use of virtual local area networks (VLANs). By isolating traffic inside particular groups or departments, VLANs improve security, management, and scalability.

  • VLAN tagging:

Ethernet frames can be marked with VLAN tagging to show which VLAN they belong to. The switch adds a VLAN tag to an Ethernet frame when it enters a switch port that has been set up for VLANs. This tag contains information about the VLAN to which the frame belongs, allowing switches to properly handle and forward tagged frames.

  • VLAN trunking:

Multiple VLANs can be carried over a single physical link between switches thanks to VLAN trunking. Trunk ports use trunking protocols like IEEE 802.1Q to transmit and receive frames from various VLANs. Trunking makes network administration easier and allows link resources to be used more efficiently.

It’s a wrap!

Data frames are an important component for receiving as well as sending data in a network, that too, reliably. We have discussed Ethernet frames in this blog and all the techniques involved in receiving and forwarding Ethernet frames through switches.

The fundamental units of communication in Ethernet networks are Ethernet frames. In order to ensure effective data transfer within local networks, switches are essential for accepting and forwarding these frames. Network administrators and anybody interested in computer networking should understand the design and functionality of Ethernet frames and switches.

What are Configuration Management Tools?


In this blog, you will learn about various configuration management tools, which are included in the CCNA course. Before we dig into the various types of configuration management tools, it is important to understand what is meant by these tools.

Note: If you have been following up with our new CCNA series, you might have come across the concept of wireless network security. If you haven’t, I recommend you do so before jumping on to this blog.

Imagine you are a network/system administrator and you manage hundreds of networks on various devices all by yourself. The main goal of your work will be to make sure that all the devices work by following the same network standards.

To ensure that, you will need a mapping system showing you all the networks, the interconnections, the interdependencies, and who is connected to whom. This is where the configuration management tools come in handy. They show you the complete picture of all the networks you’re monitoring.

So, in this guide, we will learn more about automation tools to ace the CCNA 200-301 exam. You will learn what is meant by configuration management tools, their purpose, and capabilities, why we use them, and the various configuration management tools that are used.

You will get introduced to the characteristics of the following configuration management tools:

  • Ansible
  • Puppet
  • Chef

The above-mentioned automation tools are suitable for any network. However, they are best suited for medium to large networks with thousands of connected devices.

Without further ado, let us now begin learning!

What is Meant By Configuration Management Tools?

A lot of people compare these tools to DevOps; however, DevOps is a practice centred on collaboration between people. Configuration management tools, on the other hand, are meant to automate the process of identifying, tracking, and recording changes in hardware, software, and devices in a network infrastructure.

In other words, these configuration management tools help to analyze the impact of change in any hardware or software on the whole system. This helps in reducing network disruption.

Therefore, configuration management tools can be defined as network automation tools that allow centralized control of a large number of network devices. Ansible, Puppet and Chef are the three most popular tools that you must be aware of.

Did you know that these tools were not specially built for network automation? They came into existence after the rise of virtual machines and have therefore been used by system and network administrators to create, configure, and remove virtual machines.

These configuration tools are now also widely used to manage network devices and to automate them. Ansible is the most popular configuration management tool of them all!

What are the Uses of the Configuration Management Tools?

These tools can be used to perform the following tasks:

  • These tools can be used to generate configurations for new devices on a very large scale.
  • These can be used to make configuration changes on devices present in a network or on a specific group of devices.
  • These tools can also be used to keep a check on device configurations to know if they function in tune with defined standards.
  • These tools can be used to compare configurations between devices and between various versions of configurations on the same device.

Why Do We Need Configuration Management Tools?

There are two major reasons why we need configuration management tools. These are:

  • Configuration Drift:

  • When we buy a new laptop, we change its wallpaper, font size and even change its configuration settings. This causes a drift/deviation in a device’s standard settings that are defined by a company.
  • This is known as configuration drift.
  • This can lead to future issues.
  • It is best to have standard configuration management practices even without automation tools.

 

  • Configuration Provisioning:

  • Configuration provisioning refers to the way configuration changes are applied to a device.
  • It is done by connecting to devices one-by-one through SSH. This is a traditional method.
  • However, this method is not suitable for large networks.
  • This is where the role of configuration management tools such as Ansible, Puppet, Chef, etc. comes into play.
  • They allow us to make changes to the devices on a large scale within a fraction of the time and effort.

What are the Basic Characteristics of Configuration Management Tools?

Let us now go over the fundamental features of each of the configuration management tools one by one:

Ansible:

  • Ansible is one of the most popular configuration management tools and is owned by Red Hat.
  • It has been coded in Python.
  • It does not need any special software to run on managed devices. Therefore, it is agentless.
  • It makes use of SSH to connect to devices, perform configuration changes and take out information, etc.
  • It follows a push model. The Ansible server pushes the configurations to managed devices.
  • Puppet and Chef, in contrast, use a pull model.
  • The following text files have to be created after installing Ansible:
    • Playbooks: These are the overall blueprints of automation tasks. They contain the logic and action of each task. These are coded in YAML.
    • Inventory: These are the files that keep a record of all the devices that are managed by Ansible. These are written in INI, YAML, and many other formats.
    • Templates: These files showcase a device's configuration files. These are written in Jinja2 format.
    • Variables: These files contain variables along with their values. These are written in YAML format.

Puppet:

  • It is the second most popular configuration management tool.
  • It has been coded in Ruby.
  • It is agent-based.
  • It needs specific software to be run on managed devices.
  • You must note that not all Cisco devices support a Puppet agent.
  • You can also run Puppet without an agent on the device itself: a proxy agent runs on an external host and uses SSH to connect to the managed devices.
  • The server of the Puppet management tool is called ‘Puppet Master’.
  • The client pulls the configurations from the Puppet Master. Therefore, Puppet runs on a pull model.
  • It makes use of a proprietary language instead of YAML.
  • The following text files are needed after installing Puppet:
    • Manifest: The desired configuration state of a network device is defined by Manifest.
    • Templates: These are quite similar to the templates of Ansible. These are used to build Manifest.

Chef:

  • Like Puppet, it is also a management tool written in Ruby.
  • It is based on an agent. Therefore, it requires specific software to run on managed devices.
  • The Chef agent is not supported by all Cisco devices.
  • Its files use a Domain-Specific Language (DSL) based on the Ruby language.
  • Chef uses the following text files:
    • Resources: These represent the configuration objects managed by Chef. 
    • Recipes: These represent ‘recipes’ in a cookbook. They consist of all the logic and actions for the task performed on resources.
    • Cookbook: These represent a group of recipes together.

It’s a Wrap!

In this blog, we discussed various configuration management tools such as Ansible, Puppet, and Chef. You learned the difference between the basic characteristics of these tools. 

You also learned why it is important for network devices to stay true to their standard configuration settings.

Happy learning!

The Only Guide You Need to Understand Wireless Network Security


A Network Engineer has to deal with networking devices such as routers, switches, modems, etc. To understand them better, it is crucial to understand the concept of wireless network security. This topic is also very important from the CCNA 200-301 exam’s point of view.

Note: If you have been following up with our new CCNA series, you might have come across the wireless LANs in the computer networks guide. If you haven’t, I recommend you do so if you are new to the concept of wireless networks.

In this blog, we will go through a brief introduction to wireless network security. We will also learn various authentication methods, covering not-so-secure methods to the most secure methods in modern networks. 

Let us now begin with the new concepts!

What is Meant by Wireless Network Security?

When it comes to networks, security is a very important aspect that cannot be compromised. However, it is even more important in wireless networks. Do you know why?

It is because wireless signals are not confined to a wire, so any device that comes within range of the signal can receive the traffic. In wired networks, on the other hand, traffic is typically encrypted only when it is sent over an untrusted network such as the Internet. 

For example, we usually don’t encrypt the wired traffic within a LAN. 

However, in wireless networks, it is very important to encrypt the traffic between the wireless client and AP. This is why encryption is very important in wireless networks. Therefore, we need to cover the three most important concepts:

  • Authentication
  • Encryption
  • Integrity

We will learn about encryption and integrity in detail in the next blog. So, we will focus on authentication in this blog. 

Introduction to Authentication from Wireless Perspective

As the traffic is sent from a wireless client to the AP, it is very important to authenticate all the clients in a particular wireless network. Therefore, authentication can be described as verifying the identity of a user or device. 

Only trusted users and clients should be given access to the network, especially in a corporate setting. For guest users, a separate SSID can be used so that their traffic stays apart from the corporate network.

The authentication process is not just for the AP to authenticate the clients. It is also for the clients to make sure that they do not connect to a malicious AP. A malicious AP can trick users by posing as a legitimate AP and then carry out attacks such as a Man-in-the-Middle attack.

There are many ways to carry out authentication. Some of them are:

  • Password
  • username/password
  • Digital certificates installed on the devices

What are the Various Wireless Authentication Methods?

There are seven notable methods to authenticate. These are:

  • Open authentication
  • WEP (Wired Equivalent Privacy)
  • EAP (Extensible Authentication Protocol)
  • LEAP (Lightweight EAP)
  • EAP-FAST (EAP Flexible Authentication via Secure Tunneling)
  • PEAP (Protected EAP)
  • EAP-TLS (EAP Transport Layer Security)

Let us now cover each one of them in brief. 

Less Secure Methods for Authentication

1. Open Authentication

  • The client first sends an authentication request and the AP accepts it. No questions are asked and there is no need to enter credentials.
  • This is NOT a secure authentication method.
  • The AP accepts all authentication requests.
  • This method is still used today in combination with other authentication methods.
  • Consider connecting to Wi-Fi in a restaurant that does not require a password to connect. However, after the connection is established, the network asks the user to authenticate through a portal with a user ID, etc.

2. Wired Equivalent Privacy (WEP)

  • WEP is not just an authentication method. It provides encryption as well to wireless traffic.
  • It uses the RC4 algorithm for encryption.
  • It requires the sender and receiver to use the same key. Therefore, it works on a 'shared-key' protocol to authenticate.
  • WEP keys can be 40 or 104 bits in length.
  • WEP encryption is NOT secure. It can easily be cracked.

More Secure Methods for Authentication

3. Extensible Authentication Protocol

  • It is an authentication framework that is used by various EAP (LEAP, EAP-FAST, PEAP, EAP-TLS) methods.
  • It defines a standard set of authentication functions.
  • It provides port-based network access control as it is integrated with 802.1X. It is used to limit network access for clients connected to a LAN or WLAN until they are authenticated.

EAP Authentication Methods Used in Wireless LANs

4. LEAP (Lightweight EAP)

  • It was created by Cisco as a better alternative to WEP.
  • The clients need to provide a username and password to connect with the AP to authenticate.
  • Mutual authentication is provided by both the client and the server, unlike WEP where only AP sends the authentication message.
  • The keys used are dynamic WEP keys. Therefore, they keep changing.

5. EAP-FAST (EAP Flexible Authentication via Secure Tunneling)

  • It has also been developed by Cisco.
  • It consists of three phases:
    • A Protected Access Credential (PAC) is created by the server and sent to the client.
    • A secure TLS tunnel is created between the client and the authentication server.
    • The server and client then communicate further for mutual authentication inside the TLS tunnel.

6. Protected EAP 

  • PEAP also creates a secure TLS tunnel between the client and server like EAP-FAST.
  • The server has a digital certificate instead of a PAC. 
  • This certificate is used by the client to authenticate the server.

7. EAP-TLS (EAP Transport Layer Security)

  • EAP-TLS requires all the ASs as well as clients to have a certificate.
  • It is the MOST secure wireless authentication method.
  • It is more difficult to implement than PEAP because every client device needs a certificate.
  • The TLS tunnel is still used to exchange encryption key information.

Conclusion

This marks the end of all the important concepts in wireless network security. We have majorly covered all the authentication methods in this blog.

In the upcoming blog, we will cover encryption and integrity. Stay tuned! 

Happy learning!

What is JSON and How to Interpret JSON Encoded Data?


Welcome back to the CCNA series where we cover all the important topics that are asked in the Cisco Certified Network Associate (CCNA 200-301) exam. 

In the previous blog of our CCNA 200-301 series, we talked about containers in cloud computing. I recommend you go through it before you jump to this blog. 

It is time to move further in our free CCNA series to the next concepts. You are going to learn about JSON, which is known as a data serialization language or data serialization format. Such a language allows us to format or structure data in a standardized way so that it can be used to communicate between applications. Other data serialization languages include XML and YAML.

You will also learn how to interpret JSON-encoded data. In this blog, we will cover the following important concepts:

  • Data serialization: What is it and why do we need it?
  • JSON (JavaScript Object Notation): How to interpret basic JSON-encoded data?

Without any further ado, let’s begin learning!

What is Data Serialization?

Data serialization refers to the process of converting data into a standardized format/structure that can be stored in a file or transmitted over a network and rebuilt later (i.e., by a different application). This standardized format makes sure that all the data sources have the same format and labels.

Why is it useful?

It allows the data to be communicated between applications in such a way that it could be understood by both applications. For example, if one application is written in PHP, and the other is written in Python, both languages store data differently. Therefore, they need a standard format to send data to each other.

Data serialization languages such as JSON allow us to represent variables as text. First, you need to understand what a variable is. 

Variables are containers that store values. These could be the values of “IP-address”, “status”, “netmask”, etc.  For example, 

“Status”: up

Here, “status” is variable, and “up” is value. 

How Data is Exchanged Without Data Serialization?

Without the use of any data serialization language/format, if an app is trying to get information from an SDN controller, it sends a GET message to the controller. The server (the controller) then sends its variables directly to the client without converting them to a standard format like JSON.

This way, the client doesn’t understand the received data. This is because the app and the controller are written in different languages and they store data differently. This is why they can’t communicate directly.

How Does Data Serialization by JSON Work?

Data serialization is a process that is supported by many different languages such as Java, PHP, etc. It converts the object (data) into a standard format so that it can be understood by another application. One of the most commonly used data serialization languages is JSON. 

It often helps in making communication possible between a client and a server. Since the two may be written in different languages, JSON acts as a common medium.

Whenever a client wants some information from the server, it sends a GET request to the server. The server, in turn, passes its internal variables through an API, which converts them into JSON format.

These JSON-formatted variables are sent to the client, which can then understand them. As required, the client can convert this data into its own internal variables.

Therefore, JSON helps in converting the data into a standardized format that could be understood by any application.

What is meant by JSON?

JSON (JavaScript Object Notation) is a human-readable text format that represents data as objects, arrays, strings, numbers, and so on. It is a standard for converting internal variables into a standardized format. JSON data can be sent to any client over the network and can also be saved to files.

It uses human-friendly text to store and transmit data objects. Therefore, JSON is a data serialization language that can be understood by both humans and machines.

The following are some of the key features of JSON:

  • It is standardized according to the RFC 8259.
  • JSON has been derived from JavaScript. However, it is language-independent and many modern programming languages can create and read JSON data.
  • REST APIs make use of JSON. (We will learn about REST APIs in the upcoming blogs.)
  • Spaces and linebreaks do not matter to JSON.
  • There are four primitive data types in JSON:
    • String: It is a text value. 
    • Number: It is a numeric value.
    • Boolean: It is a data type that has only two possible values: true and false.
    • Null: It represents the absence of any object value.
  • There are two structured data types in JSON:
    • Object: It is an unordered list of key-value pairs, i.e., variables.
    • Array: It is a series of values separated by commas. It is NOT key-value pairs.

Let us now cover the structured data types in detail!

What are JSON Structured Data Types?

It is very important to understand the structured data types of JSON to interpret the JSON formatted files. The best part about JSON is that it can be read by humans as well as machines.

As discussed earlier, there are two types of structured data in JSON. Let’s cover them one by one to understand JSON scripts in a better way.

Object:

  • An object is an unordered collection of key-value pairs (variables).
  • We use curly brackets ({}) to represent objects.
  • The key is of a string primitive data type.
  • The value can contain any valid JSON data type such as string, numeric, boolean, null, object, or array.
  • A colon (:) is used to separate the key from the value.
  • If there are multiple key-value pairs, each pair is separated by a comma.

Example:

{
  "Interface" : "GigabitEthernet1/1",
  "Is_up" : true,
  "Ipaddress" : "176.134.1.2",
  "Netmask" : "255.255.255.0",
  "Speed" : 1000
}

Note: Objects within objects are called 'nested objects'.

Array:

  • It is a series of 'values' that are separated by commas.
  • It is NOT a key-value pair.
  • The values do not have to be of the same data type.

Example:

{
  "Interfaces" : [
    "GigabitEthernet1/1",
    "GigabitEthernet1/2",
    "GigabitEthernet1/3"
  ],
  "random_values" : [
    "Hello",
    57
  ]
}

You can see that the values inside "random_values" have different data types, unlike the values inside "Interfaces".
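To see how a program turns this text into internal variables, here is a minimal Python sketch (Python is just one example of a language that can read JSON):

```python
# Parse JSON text into native Python variables (a dict containing lists).
import json

text = '''
{
  "Interfaces": ["GigabitEthernet1/1", "GigabitEthernet1/2", "GigabitEthernet1/3"],
  "random_values": ["Hello", 57]
}
'''

data = json.loads(text)                 # JSON text -> internal variables
print(data["Interfaces"][0])            # GigabitEthernet1/1
print(type(data["random_values"][1]))   # <class 'int'>
```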

I recommend you practice some questions on the Internet to interpret the JSON data. Just looking at these examples won’t help you in understanding the concept easily. 

Conclusion:

In this guide, we have learned about data serialization languages such as JSON. It is critical to understand this concept if you are preparing to take the CCNA interviews and the CCNA 200-301 exam.

The second most important thing is to learn how to interpret the JSON formats. I recommend you practice more questions. 

Happy learning!

What Are Containers In Cloud Computing And How Do They Work?


It’s no secret that containers have taken the cloud computing world by storm. The technology has been widely adopted by enterprises of all sizes and has become the preferred way to package and deploy applications.

what's included in a container in Cloud Computing?

So, what exactly are containers and how do they work? In simple terms, a container is a self-contained unit of software that includes everything needed to run an application: the code, runtime, system tools, and libraries.

Containers are isolated from each other and can be run on any server, making them ideal for cloud computing. This blog explains everything you need to know about containers, including their benefits and how they work. You will also learn the difference between Virtual Machines and containers.

Stay tuned till the end to learn the best!

What are Containers?

Containers are software packages that contain an App along with all the other dependencies such as binaries and libraries for the contained app to run. As mentioned earlier, they are a self-contained unit of software.


Note that multiple apps can run in a single container, although this is usually not done. You can generally assume that one container means one app.

The following are the key features of the containers in cloud computing:

  • Containers run on a container engine such as Docker engine, which is the most popular one.
  • This container engine runs on a host Operating System such as Linux on the hardware.
  • Since containers are small, lightweight, and include only the necessary dependencies, they do not need to run an OS in each container. Virtual Machines, on the other hand, do need their own OSs.
  • The major difference between VMs and containers is that VMs run an OS in each VM, whereas containers don’t.
  • Due to this major difference, there is a huge difference in the costs and benefits of containers and VMs.
  • A software platform that automates the deployment, management and scaling, etc. of the containers is called a Container Orchestration.
  • For example, you must have heard about Kubernetes, which is the most popular container orchestrator. Docker also has one container orchestrator called Docker Swarm.

What’s the Need for Container Orchestrator?

In small numbers, containers can be operated manually. However, large-scale systems, such as those built on microservices, may require thousands of containers, and that many containers cannot be managed manually in real time.

So, instead of one app, you might have hundreds of different microservices running together to form the larger solution. 

Note: Microservices architecture is an approach to software architecture that divides a larger solution into smaller parts (microservices). These microservices run in containers that can be orchestrated by Kubernetes or another platform.
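To tie this back to the JSON section earlier, here is a minimal sketch of what a container definition handed to an orchestrator can look like. It is a Kubernetes Pod manifest written in JSON (Kubernetes also accepts YAML); the pod name "web" and the "nginx" image are just illustrative choices, not something required by Kubernetes:

{
  "apiVersion" : "v1",
  "kind" : "Pod",
  "metadata" : { "name" : "web" },
  "spec" : {
    "containers" : [
      {
        "name" : "nginx",
        "image" : "nginx:latest",
        "ports" : [ { "containerPort" : 80 } ]
      }
    ]
  }
}

If you saved this as pod.json, you could hand it to Kubernetes with a command such as kubectl apply -f pod.json, and the orchestrator would take care of pulling the image, starting the container, and restarting it if it fails.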

What’s the Difference between Containers and Virtual Machines?

| Virtual Machines (VMs) | Containers |
| --- | --- |
| Each VM runs its own OS, so it can take minutes to boot up. | Containers can boot up in milliseconds. |
| VMs occupy gigabytes of disk space. | Containers take up little disk space, typically only megabytes. |
| VMs are portable and can move between physical systems running the same hypervisor. | Containers are even more portable: they are smaller, boot faster, and Docker containers can run on any container service. |
| VMs are strongly isolated because each VM runs its own OS. | Containers are less isolated since they all share the same host OS. |
| If one guest OS crashes, the other VMs are not affected. | If the host OS crashes, all the containers on it crash. |

Even though there is a major shift towards the use of containers, especially due to the adoption of automation and DevOps, VMs are still widely used today and will continue to be used.

How do Containers Work?

Containers are built and run using containerization engines such as Docker, and managed at scale by orchestrators such as Kubernetes. The containerization engine is responsible for creating, deploying, and managing containers. 

When a developer creates a container, it contains all the necessary components required to run the application. This includes the application code, runtime, system tools, and libraries. The container runs independently of other containers on the same server or cloud provider. 

Containers use a host operating system, which means that they share the same kernel as the host. This results in faster startup times and lower resource consumption. When a container is started, it runs in its own isolated environment, with access to the resources it needs to run the application. This ensures that the container does not interfere with other containers or the host system. 

Benefits of Containers in Cloud Computing

1. Portability: 

Containers are highly portable and can run on any server, cloud provider, or local environment without any issues. 

2. Efficiency: 

Containers are lightweight and consume fewer resources compared to traditional virtual machines. This makes them perfect for deploying applications at scale. 

3. Isolation: 

Containers are isolated from each other, which helps in avoiding conflicts between different applications. 

4. Consistency: 

Containers ensure that the application runs in the same environment regardless of where it is deployed. This eliminates the risk of any unexpected behavior due to environmental differences. 

5. Fast deployment: 

Containers enable fast deployment of applications, as they can be easily created, started, and stopped within seconds.

Conclusion

In conclusion, containers are a game-changer in the world of cloud computing. They offer numerous benefits over traditional virtual machines, including portability, efficiency, isolation, consistency, and fast deployment. 

By understanding how containers in cloud computing work, developers can use this technology to build and deploy applications more efficiently. In fact, it is also helpful for Network Engineers or DevOps Engineers for automation and programmability.

Keep learning!

Deployment Models of Cloud Computing with Examples

Deployment Models of Cloud Computing

What’s the first thing that pops up in your mind when you hear the word ‘cloud’? A lot of people assume that cloud computing refers to public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Although it is true that it is the most common deployment model of cloud computing, it isn’t the only one!

In this guide, we will discuss the four deployment models of cloud computing.

Before you head on to the deployment models, it is important to understand the different service models of cloud computing.

The most common deployment model is the ‘public cloud’ model such as that of AWS. In fact, big MNCs like eBay, Apple, Fitbit, and Netflix are using cloud computing to run their business models smoothly. One example of the success of deployment models in cloud computing is how Netflix uses cloud computing to scale up its infrastructure and cut down its operational costs at the same time!

Stay tuned till the end of the blog to learn about the deployment models of cloud computing in detail. This is also useful to know if you are preparing to take the CCNA certification exam.

The Four Deployment Models of Cloud Computing

The following are the four deployment models of cloud computing:

  • Private cloud
  • Community cloud
  • Public cloud
  • Hybrid cloud

Let us now discuss each one of the four deployment models.

1. Private Cloud

According to the National Institute of Standards and Technology (NIST), a private cloud is a cloud infrastructure provisioned for exclusive use by a single organization comprising multiple consumers, such as business units. 

It may be owned, managed, and operated by the organization, a third party, or a combination of them. The following are the key features of the private cloud:

  • The private cloud is mostly used by large enterprises or government organizations.
  • Although the cloud is private, it can still be owned by a third party.
  • For example, although AWS is known as a public cloud offering, it also provides a private cloud service called the Amazon Virtual Private Cloud or Amazon VPC. Red Hat Inc. uses Amazon VPC.
  • Private clouds can be hosted on-premises or off-premises. Many people equate the private cloud with on-premises infrastructure, but the two are not always the same.
  • The same kinds of services, such as SaaS, PaaS, and IaaS, are offered in the private cloud as in the public cloud.
  • The difference is that the infrastructure is reserved for a single organization’s use.

2. Community Cloud

The community cloud provides cloud infrastructure that is reserved for a community of consumers from organizations that have shared concerns such as security requirements, policies, and compliance considerations. 

It could be owned, managed, and operated by one or more of the organizations in the community or even by a third party.

The key features of the community cloud are as follows:

  • This is the least used cloud deployment.
  • It is quite similar to the private cloud but the infrastructure is reserved to be used by a group of organizations holding the same set of interests.

For example, Walmart uses the Salesforce community cloud for its customer experience.

3. Public Cloud

Let us now explore the most popular cloud deployment model, the public cloud. This cloud infrastructure is open to be used by the general public. 

It can be owned, managed, and operated by an MNC, government organization, academic organization, or a combination of them. 

The highlighting pointers of the public cloud are as follows:

  • It is accessed over the Internet through the cloud provider’s platform.
  • This is the most widely used cloud deployment model.
  • Some of the most popular public cloud providers are as follows:
    • Amazon Web Services (AWS)
    • Microsoft Azure
    • GCP (Google Cloud Platform)
    • OCI (Oracle Cloud Infrastructure)
    • IBM Cloud 
    • Alibaba Cloud
  • AWS has been the leading public cloud service provider for a long time.

4. Hybrid Cloud

A hybrid cloud is a cloud infrastructure composed of two or more distinct cloud infrastructures, such as private, public, or community clouds. These remain separate entities but are bound together by standardized or proprietary technology that enables data and application portability. 

A good example of it is cloud bursting for load balancing between clouds. It is a configuration set up between a private and a public cloud. 

The hybrid cloud is a popular choice among streaming services such as Netflix and Hulu because of sudden spikes in bandwidth demand; these spikes can be offloaded to the public cloud when the private infrastructure runs out of capacity. Other companies using it include Airbnb and Uber.

Hybrid Cloud

The key features of the hybrid deployment model are:

  • It is a combination of two or more of the other three cloud deployment models, namely, the public, private, or community cloud.
  • For example, a private cloud can offload to a public cloud when needed due to resource restrictions.

Can We Expect New Cloud Services in the Industry?

In the near future, we can expect generative AI cloud services in the market. A California-based MNC called Nvidia has already taken the first step. 

It has introduced a new set of cloud services that will allow businesses to create and use their own generative AI models. These models can be customized according to their own proprietary data and needs.

This cloud service will be very helpful to enterprises as they can build their own applications for their own needs such as customer support, intelligent chat, digital simulation, and even professional content creation.

The cloud computing industry is the future, for sure!

Conclusion

The pace of advances in the cloud industry is hard to beat, as recent developments show. It is very important to understand the deployment models of cloud computing if you want to learn the basics of cloud computing.

This guide is helpful to those who want to take the CCNA 200-301 exam or are simply curious to learn more about cloud computing.

Stay tuned for more blogs for the CCNA 200-301 series!

Keep learning!

A Comprehensive Guide To The Different Service Models Of Cloud Computing

Service Models Of Cloud Computing

Cloud computing is a technology that enables organizations to access data and applications over the Internet. This technology has revolutionized the way businesses operate and has created new opportunities for organizations of all sizes. Cloud computing has taken over the mainstream IT industry and has become an increasingly popular option for businesses of all sizes. It’s flexible, scalable, and can be a cost-effective way to power your business. There are so many different service models of cloud computing to choose from.

There are four different service models of cloud computing, which are:

  1. Infrastructure as a Service (IaaS)
  2. Platform as a Service (PaaS)
  3. Software as a Service (SaaS)
  4. Function as a Service (FaaS)

Each of these service models has its own advantages and disadvantages. In this blog, we will take a comprehensive look at each of the different service models of cloud computing.

In the previous blog of our CCNA 200-301 series, we talked about the concept of virtualization and cloud computing. I recommend you go through it before you jump to this blog. 

Keep reading to learn more!

Various Service Models of Cloud Computing:

1. Infrastructure as a Service (IaaS)

  • Infrastructure as a Service (IaaS) is the most basic and foundational cloud computing service model. 
  • With IaaS, the cloud provider offers virtualized computing resources, including servers, storage, and networking equipment. 
  • It provides GUI and API-based access.
  • Users can rent these resources and have complete control over them, installing and running their choice of an operating system, software, and applications. 
  • IaaS is ideal for organizations that need complete control over their computing resources, have complex applications that require significant customization, or need to handle sudden spikes in usage. 

Advantages of IaaS: 

  • Allows for complete control and customization of infrastructure.
  • Typically offers high scalability and flexibility.
  • Lower upfront costs compared to on-premise infrastructure.

Disadvantages of IaaS: 

  • The user is responsible for managing and patching the operating system, middleware, and applications.
  • Requires more in-house IT expertise than PaaS or SaaS.
  • Costs can become hard to predict as usage scales.

Examples of IaaS providers include Amazon EC2, Microsoft Azure Virtual Machines, and Google Compute Engine.

2. Platform as a Service (PaaS) 

  • Platform as a Service (PaaS) is a cloud computing model that offers a platform for application development, deployment, and management. 
  • With PaaS, the cloud provider provides a preconfigured platform that includes operating systems, database systems, and development tools. 
  • Users can focus on developing applications rather than worrying about infrastructure and management. 
  • It has the ability to auto-scale.
  • PaaS is ideal for organizations that want to develop, test, and deploy applications quickly, without the need for extensive infrastructure management. 

Advantages of PaaS: 

  • Allows for accelerated application development and deployment.
  • Offers pre-configured development tools and platforms.
  • Lower ongoing costs compared to IaaS.

Disadvantages of PaaS: 

  • Limited control over infrastructure and development tools.
  • May require customization to meet specific needs or requirements.
  • Updates are applied automatically by the provider, so the user does not control when they happen.
  • It can only be accessed over the Internet.
  • Scaling limitations may be present.

Examples of PaaS providers include Heroku, IBM Cloud, and Oracle Cloud Platform. 

3. Software as a Service (SaaS)

  •  Software as a Service (SaaS) is a cloud computing model in which the cloud provider offers on-demand access to software applications over the internet. 
  • With SaaS, the provider manages the infrastructure, software, and data, while users interact with the software via a web browser or mobile app. 
  • SaaS is ideal for organizations that want to minimize IT resource requirements, reduce upfront costs, and have access to software on a subscription basis.
  • Examples of SaaS providers include Salesforce, Dropbox, and Microsoft Office 365. 

Advantages of SaaS: 

  • Low upfront costs as it follows the pay-as-you-go model.
  • Easy to deploy and manage with little to no IT resources.
  • Automatic scaling with no infrastructure management required.

Disadvantages of SaaS: 

  • Limited customization options.
  • May require internet connectivity to access software.
  • May not integrate with existing software tools or workflows.

4. Function as a Service (FaaS):

  • Function as a Service (FaaS) is a cloud computing model that allows developers to create and deploy small, self-contained pieces of code as functions. 
  • With FaaS, the cloud provider manages the infrastructure, scaling, and availability, while developers focus on writing the code. 
  • FaaS is ideal for organizations that want to create microservices, small applications, or automate specific tasks within larger applications.
  • Some of the examples of FaaS providers are AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions.

Advantages of FaaS: 

  • Extreme scalability and flexibility.
  • Cost-effective, pay-per-use pricing model.
  • Simplifies development and deployment of small, modular functions.

Disadvantages of FaaS: 

  • Limited programming language support compared to other service models.
  • Limited runtime environments and execution times compared to other service models.
  • High latency and cold start times may be present.

Conclusion

Cloud computing has transformed how organizations operate, providing a range of service models to meet various business needs. Whether you need complete infrastructure control or want to develop and deploy applications quickly, cloud computing can offer significant benefits. 

Understanding the different service models of cloud computing is crucial for building a basic understanding of virtualization and cloud computing, and it will also help you prepare for the CCNA 200-301 exam.

Stay tuned for upcoming blogs on deployment models in cloud computing in the CCNA series.