Understanding gRPC: The Modern RPC Framework
There is a lot of buzz in the tech world today around gRPC. RPC, or Remote Procedure Call, is a style of communication that lets microservices talk to each other quickly and efficiently.
Now, this seems to be quite interesting, right?
So without further ado, let’s take a look at what gRPC is all about!
gRPC is a framework developed by Google and announced as open source in February 2015. The name is a recursive acronym standing for gRPC Remote Procedure Calls. It consists of two parts: the gRPC protocol itself and a data serialization layer. For serialization, gRPC uses Protobuf by default, but you can pair gRPC with other serialization formats to suit your requirements and needs.
gRPC is built on an HTTP/2-based protocol and is able to take advantage of everything HTTP/2 offers. HTTP/2 itself grew out of Google’s earlier work on SPDY.
HTTP/2 gives gRPC a number of built-in features: a persistent single TCP connection, header compression, and cancellation and timeout contracts between client and server.
The protocol also inherits HTTP/2’s built-in flow control on data frames. This helps your system stay efficient from the client’s point of view, but it adds complexity when you are diagnosing issues in your infrastructure, because both the server and the client can set their own flow-control values.
Load balancing is normally handled by the client: it chooses a server for a given request from a list provided by a load-balancing server. The LB server monitors the health of the endpoints and uses that, along with other factors, to manage the list it hands out to clients. The client itself typically uses a simple algorithm such as round-robin, while the more complex logic for building the client’s list lives in the LB server.
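To make the split concrete, here is a minimal sketch of the client’s side of that arrangement: a round-robin picker cycling over a server list. The class name and addresses are hypothetical, and in a real deployment the list would come from the load-balancing server rather than being hard-coded.

```python
import itertools

class RoundRobinPicker:
    """Toy client-side round-robin picker (illustrative sketch only).

    A real gRPC client would refresh its server list from a lookaside
    load-balancing server; here we simply cycle over a static list.
    """

    def __init__(self, servers):
        if not servers:
            raise ValueError("server list must not be empty")
        # itertools.cycle yields the servers in order, forever.
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Return the next server for the outgoing request.
        return next(self._cycle)

picker = RoundRobinPicker(["10.0.0.1:50051", "10.0.0.2:50051"])
assert [picker.pick() for _ in range(4)] == [
    "10.0.0.1:50051", "10.0.0.2:50051",
    "10.0.0.1:50051", "10.0.0.2:50051",
]
```

The point of the sketch is the division of labor: the picker stays trivial, and all the health checking and list building happens elsewhere.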
Different Types of gRPC
For client-server communication, gRPC comes in two types: unary and streaming.
Unary calls are synchronous requests made to the gRPC server: the client sends a single request and blocks until a response is received.
Streaming is the more powerful type, and it can be put to use through three different configurations:
- Server pushing messages to a stream
- Client pushing messages to a stream
- Client and server each pushing messages to their own stream (bidirectional)
In the third option, client and server each send data on their own stream in the same manner as above. Whichever option you choose, the RPC is always initiated by the client. Streams do not provide acknowledgement receipts until the stream completes, which can add complexity when the system has to handle network partitions or node failures. The impact can be reduced by using a bidirectional stream to return ACKs: if the server is given the chance to terminate the connection cleanly, it can return a message indicating the last message it received.
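All four method shapes can be expressed in a single service definition. The following sketch is hypothetical (the service and message names are invented for illustration), but the `stream` keyword placement is how protobuf distinguishes the configurations described above:

```protobuf
syntax = "proto3";

// Hypothetical service illustrating the RPC shapes.
service Telemetry {
  // Unary: single request, single response.
  rpc GetStatus (StatusRequest) returns (StatusReply);

  // Server streaming: one request, a stream of responses.
  rpc WatchStatus (StatusRequest) returns (stream StatusReply);

  // Client streaming: a stream of requests, one response.
  rpc UploadSamples (stream Sample) returns (UploadAck);

  // Bidirectional streaming: both sides push messages independently.
  rpc Exchange (stream Sample) returns (stream StatusReply);
}

message StatusRequest { string node = 1; }
message StatusReply   { string state = 1; }
message Sample        { double value = 1; }
message UploadAck     { uint32 received = 1; }
```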
Protobuf is simply the default serialization format for the data sent between clients and servers. Like most data interchange formats, Protobuf does not attempt zero-copy of data; instead it encodes and decodes bytes. This makes your data smaller on the wire, but you pay for it in CPU time spent encoding and decoding messages. Unlike serialization formats such as XML or JSON, Protobuf offers strongly-typed fields in an encoded binary format, which removes much of the encoding overhead and lets decoding proceed in a predictable manner.
The Protobuf File
Protobuf defines how messages are to be interpreted and then lets developers generate stubs. These stubs make encoding and decoding values simple and efficient.
Before getting into the mechanics of encoding and decoding data, there are some behaviors you need to understand. Every protobuf encoder/decoder must be able to set defaults for fields it cannot find, and to skip fields it has no knowledge of. Decoders must also be able to handle fields that arrive out of order, even though encoders normally write all fields in order. Finally, depending on the field’s definition, decoders will either merge or concatenate fields that appear more than once.
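Those duplicate-field rules can be modeled in a few lines. This is a toy model, not a real decoder: it assumes that for a non-repeated scalar field the last value seen wins, while values for a repeated field are concatenated in order.

```python
def merge_fields(pairs, repeated_fields):
    """Toy model of protobuf's duplicate-field handling.

    `pairs` is a list of (field_number, value) tuples in wire order.
    Non-repeated scalar fields keep the last value seen; repeated
    fields accumulate every value in order.
    """
    result = {}
    for number, value in pairs:
        if number in repeated_fields:
            # Repeated field: concatenate values in arrival order.
            result.setdefault(number, []).append(value)
        else:
            # Scalar field: the last occurrence wins.
            result[number] = value
    return result

decoded = merge_fields([(1, "a"), (2, 10), (1, "b"), (2, 20)],
                       repeated_fields={2})
assert decoded == {1: "b", 2: [10, 20]}
```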
Every field carries a field number along with a wire type. The wire type tells the decoder how to interpret the data that follows. The decoding strategy then changes based on the field type.
Viewed in binary format, a field looks like this:

Field Number | Wire Type | Data
The field number and wire type are packed together into a single varint, often called the tag: the low three bits hold the wire type, and the remaining bits hold the field number. Depending on the wire type, additional field information follows the tag; for length-delimited types such as strings, another varint gives the length of the data.
Every field in a protobuf message follows this layout, and the decoder exploits it to move through the fields: the tag gives it enough information to either decode a field or skip it. Because the wire type arrives in the first bytes of the field, the decoder can quickly jump to the next field whenever it sees one it is not looking for.
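The tag and varint mechanics above are small enough to sketch directly. This is a minimal illustration of the encoding scheme, not a production codec; the function names are our own.

```python
def encode_varint(n):
    """Encode a non-negative int as a base-128 varint (7 bits per byte,
    high bit set on every byte except the last)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(buf, pos):
    """Decode a varint from buf at pos; return (value, new_pos)."""
    value, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, pos
        shift += 7

def encode_tag(field_number, wire_type):
    """The tag is a varint: field number in the high bits,
    wire type in the low three bits."""
    return encode_varint((field_number << 3) | wire_type)

# Field 1 with wire type 0 (varint) encodes as the single byte 0x08.
assert encode_tag(1, 0) == b"\x08"
# Decode the two-byte message: tag 0x08, then varint value 150.
tag, pos = decode_varint(b"\x08\x96\x01", 0)
assert (tag >> 3, tag & 7) == (1, 0)
value, _ = decode_varint(b"\x08\x96\x01", pos)
assert value == 150
```

Splitting the tag this way is what lets a decoder skip fields it does not recognize: the wire type alone tells it how many bytes to jump over.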
gRPC and Protobuf
These two entities are actually independent of each other. Although gRPC does not expose a pluggable encoding out of the box, you could in theory swap the encoding method. The catch is that the protobuf stub-generation tool, protoc, performs all of the automatic client- and server-side code generation. Change the encoding and you lose the ability to generate client and server stubs across roughly ten different languages.
Stubbing and Backward Compatibility
Protobuf tries to build safeguards into its encoding scheme. In theory, the stubs generated by protobuf should be backward compatible. This compatibility requirement is one of the reasons the “optional” and “required” keywords were removed.
When it comes to microservices, some people are still hesitant to share stubs. Even though protobuf provides conventions that help services that have not been updated remain backward compatible, developers still have to actually use and follow them. There will always be a developer who breaks your application by changing field 3 from a uint32 straight to a string.
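Here is what that kind of break looks like in practice. The message below is hypothetical; the point is that reusing a field number with a different wire type defeats the compatibility conventions:

```protobuf
// Original definition.
message User {
  string name  = 1;
  string email = 2;
  uint32 age   = 3;
}

// A breaking change: field 3 keeps its number but switches type.
// Old decoders expect a varint at field 3 and will misread the new
// length-delimited string payload. The safe convention is to reserve
// the old number and add the new type under a fresh field number.
message User {
  string name  = 1;
  string email = 2;
  string age   = 3;  // was uint32 -- incompatible on the wire
}
```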
Being Cloud Native
One of the key components of a microservice architecture is the protocol chosen for interservice communication. When you choose one, you need to be sure it meets some very important basic goals: being resilient to a changing environment, being quick to develop against, and performing well both in your application and on the wire. gRPC meets these goals and so lets you have an architecture that is agile and easily maintainable. When it comes to Cloud Native, it seems to be the right choice.
So, should you be using gRPC or not?
When adopting any new architecture, we need to be sure it has been properly evaluated and has a strong community behind it. gRPC fulfills both conditions: it has been adopted by some of the top names in the market, such as CoreOS, Netflix, and Square. Getting on board with the protocol quickly should also be a goal, and code generation makes that possible directly, though situations will arise where engineers need to understand the technicalities of HTTP/2. For people working on large teams, the code generation and backwards compatibility that come with protobuf are something to look forward to.