- Rajiv Anand
The emergence of cloud as computing infrastructure has powered a transformation of the modern internet. However, we tend to ignore the other massive computing transformation that has seen tremendous growth at the edge. The computing infrastructure that powers sensors, gateways, and edge processing is growing at the rate of millions of devices every day.
What is edge computing?
In recent years, sensors have become more prevalent in manufacturing facilities, businesses, and even in your car. A vast amount of data is being generated at this “edge” of computing networks. Traditionally, all of this data is relayed back to a central server in the cloud. The server may then need to send instructions back to the devices after the incoming data has been processed and business-relevant information has been extracted.
The first problem with this approach is, of course, the reliability of the network. We all still struggle with dropped calls and disconnects, even with today's ubiquitous connectivity. If a system depends on a quick and reliable reaction to sensed data, this can spell trouble. For example, your manufacturing line cannot be allowed to stop because of a broken internet connection.
The second problem is network latency. If a deer runs in front of your autonomous car, the processing has to happen instantaneously to apply the brakes. Even with the fastest connectivity to the data center, this could be a matter of life and death. A less severe but still important example is automated shipping, where latency could mean the wrong item being shipped to a client.
Edge computing offers a solution to these issues by bringing the necessary computing infrastructure close to where the sensors are. Edge devices, gateways, and sensor-aware software can gather and process data in real time, allowing them to respond faster, more reliably, and more effectively.
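To make this concrete, here is a minimal sketch of an edge control loop in Python. All names here are hypothetical stand-ins: `read_sensor` represents a real sensor driver, `stop_line` a local actuator call, and `VIBRATION_LIMIT` a threshold that in practice would come from the plant's safety specification. The point is that the decision to act happens on the device, with no cloud round trip.

```python
import random

# Hypothetical threshold for a line-stop condition; a real value
# would come from the plant's safety specification.
VIBRATION_LIMIT = 7.5

def read_sensor():
    """Stand-in for a real sensor driver: returns one vibration reading."""
    return random.uniform(0.0, 10.0)

def stop_line():
    """Stand-in for a local actuator call -- no network round trip needed."""
    print("Line stopped locally")

def edge_loop(iterations=5):
    """Gather sensor data and react at the edge; only summaries would
    later be forwarded to the cloud for analysis."""
    readings = []
    for _ in range(iterations):
        value = read_sensor()
        readings.append(value)
        if value > VIBRATION_LIMIT:
            stop_line()  # immediate local reaction, independent of connectivity
    return readings
```

Even if the uplink drops, the loop keeps protecting the line; the cloud only ever sees the collected readings after the fact.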
Best of both
While the cloud provides huge scalability of computing infrastructure, it is important to factor computing resources at the edge into any solution deployment. As IoT devices have become more widespread, organizations need effective edge computing architectures to build scalable solutions. It's not an either/or question. Enterprises can maximize the potential of both by carefully designing across edge and cloud computing (a network infrastructure sometimes called fog computing). Careful consideration must be given to the balance that determines how much data travels back and forth between the two. Depending on the scale, this can mean operating a local data center.
People, sensors, and machines are expected to generate a staggering 800 zettabytes of data. A massive share of this data is, by nature, temporal or duplicate. With a carefully designed architecture, only business-relevant data needs to move between the two infrastructures, avoiding unnecessary storage costs.
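One common way to keep temporal and duplicate data at the edge is change-based filtering: a reading is forwarded to the cloud only when it differs meaningfully from the last value sent. The sketch below is a simplified illustration of this idea; the function name and the `delta` threshold are hypothetical choices, not a prescribed API.

```python
def filter_for_cloud(readings, last_sent=None, delta=0.5):
    """Suppress duplicates and near-duplicates at the edge: forward a
    reading only when it differs from the last sent value by at least
    `delta`. Everything else stays local, reducing transfer and storage."""
    to_send = []
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= delta:
            to_send.append(value)
            last_sent = value  # remember what the cloud already has
    return to_send

# A slowly drifting sensor produces five readings, but only two
# carry new information worth shipping upstream.
filter_for_cloud([1.0, 1.1, 1.2, 3.0, 3.1])  # → [1.0, 3.0]
```

In a real deployment the threshold would be tuned per signal, and aggregates (averages, anomaly flags) would typically be sent instead of raw values, but the principle is the same: decide at the edge what is business-relevant before it crosses the network.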
Today, specific business systems may exist solely in the cloud or solely on-premises. The future of network and computing infrastructure, however, is likely to be found somewhere between the edge and the cloud.