What is edge computing?

May 4, 2020

Edge computing is a fantasy! Yet another cutting-edge concept that has not been commercially delivered. Or has it?

First, there needs to be a bit of jargon busting, because the meaning of “edge” changes depending upon who you speak to. So, what is edge computing?

The sales speak is that edge technology puts processing power right on your hardware. With edge computing you can collect and analyse data on the device itself, then adjust your controller or send alerts without needing tonnes of bandwidth.

What is the definition of edge?

The operational definition is that “edge” simply means closer to your device than a central data server.

Two quite different definitions and two different ways of interpreting the technology and potential applications.

There is no question that edge computing, when available, will be a breakthrough. But is the hype around edge computing deserved, or is it misplaced?

So, let’s talk about a typical IoT architecture, the potential issues and how edge computing may be an answer.

Number one: A device is measuring the vibration of a motor in a dairy bottling facility.

In the first example, data is sampled on average every five minutes. The data is then passed over a Wi-Fi connection to a cloud-hosted server, which has multiple such devices relaying information to it. The requirement is to send an alert if the vibration profile “peaks”. The alert is passed to a server to send an SMS.

Such an application has minimal processing demand and should be relatively inexpensive.
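
As a rough illustration, here is a minimal Python sketch of that first architecture: a cloud-side handler checks each reading against a peak threshold and hands off to an SMS sender. The threshold, the payload shape and the send_sms stand-in are all assumptions made for the sake of the example, not any particular device’s API.

```python
from dataclasses import dataclass

PEAK_THRESHOLD = 8.0  # mm/s RMS; an assumed alert level, not a standard

@dataclass
class Reading:
    device_id: str
    vibration_rms: float  # vibration velocity in mm/s

def send_sms(message: str) -> None:
    """Stand-in for the separate SMS server described above."""
    print(f"SMS -> {message}")

def handle_reading(reading: Reading) -> None:
    # One reading every five minutes, so a simple threshold check is
    # plenty; any latency in this path goes unnoticed.
    if reading.vibration_rms > PEAK_THRESHOLD:
        send_sms(f"Vibration peak on {reading.device_id}: "
                 f"{reading.vibration_rms:.1f} mm/s")

handle_reading(Reading(device_id="bottling-motor-3", vibration_rms=9.2))
```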

Number two: A camera is sending surveillance data for facial recognition.

There are several stages of processing required to make the pixels captured by a camera visible on a video display.

The frames need to be captured, compressed and transmitted. Typical latencies are low (at 1080p and 30fps, a single scan line accounts for roughly 0.030ms), so, depending upon the network, the data would reach the server within about 1 second.
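
That per-line figure is easy to sanity-check with a quick back-of-envelope calculation; the frame rate and line count are the only inputs:

```python
FPS = 30      # frames per second at 1080p
LINES = 1080  # scan lines per frame

frame_time_ms = 1000 / FPS            # ~33.3 ms to capture one frame
line_time_ms = frame_time_ms / LINES  # ~0.03 ms per line, as quoted above

print(f"Frame time: {frame_time_ms:.1f} ms")
print(f"Per-line latency: {line_time_ms:.3f} ms")
```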

Facial recognition software can process 75 million images in just one-tenth of a second, yet processing multiple camera frames end to end can still take 5-10 seconds.

These solutions are more expensive precisely because they must overcome latency.

What is latency?

Latency is the delay before a transfer of data begins following an instruction for its transfer.

In the first example, the low sampling rate and low probability of failure mean that the time delay in receiving data, analysing it and contacting a separate server is not an issue. Any latency in the data transfer will go unnoticed.

In the second example, the data is processed, an issue is identified and an alert is sent to an SMS or mail server. This alone can add 5 seconds.

Within 15 seconds a criminal or offender can be 25 metres from where they were spotted.  Latency here is an issue.
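
To make that budget explicit, here is a rough tally using the figures above. How the total splits across stages is an assumption; only the 15-second total and the 25 metres come from the numbers already quoted:

```python
capture_and_transfer_s = 5  # assumed: frame capture, compression, network
recognition_s = 5           # within the quoted 5-10 s processing window
alerting_s = 5              # "this can add 5 seconds"

total_s = capture_and_transfer_s + recognition_s + alerting_s
walking_speed = 25 / 15     # m/s implied by 25 metres in 15 seconds

print(f"End-to-end delay: {total_s} s")
print(f"Distance covered: {total_s * walking_speed:.0f} m")
```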

Bringing computation closer to the device (if not on the device itself) will reduce latency and for certain applications that is beneficial.

Edge versus the cloud

Traditional cloud computing models use centralised data centres, usually sited in a handful of locations in each region. The edge, despite what the name suggests, can be anywhere closer to the device than a traditional cloud data centre. Whatever the approach, edge computing is used to minimise latency and improve user experience.

However, latency should not be the only driver when considering edge. The UK is a great example of inconsistent network coverage, both between technologies and between operators. In fact, poor network coverage has mothballed more business cases than anything else.

With edge, assuming you can process data on the device, you should not need to send enormous volumes of data over the network. This opens up more options for what data is transmitted, and how.

What is big data?

Companies generate large amounts of data in their aspiration to become big-data-driven. The truth is that 90% of the data generated is useless, and the remaining 10% of useful data is often structured in a way that renders it expensive to manipulate.

Moving data costs money, and if you don’t have to transmit it you don’t have to pay for it. Edge computing provides a way to extract value from data where it is generated, never moving it beyond the edge. If necessary, the data can be pruned down to a subset that is more economical to send to the cloud for storage or further analysis.
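
As a sketch of what that pruning can look like, the snippet below aggregates a burst of raw readings on the device and transmits only a compact summary. The summary fields and the sample values are illustrative assumptions:

```python
import json
import statistics

def summarise(samples: list[float]) -> str:
    """Reduce a burst of raw readings to a small payload worth transmitting."""
    summary = {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "min": min(samples),
        "max": max(samples),
    }
    return json.dumps(summary)

# Illustrative readings; in the terms above, most of this is the 90%
# that never needs to leave the device.
raw = [4.1, 4.3, 9.2, 4.0, 4.2, 4.1, 4.4, 4.0, 4.3, 4.2]
payload = summarise(raw)
print(f"{len(raw)} readings reduced to {len(payload)} bytes: {payload}")
```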

Do cloud and edge work together?

Amazon now has “AWS Outposts”: a fully assembled rack of compute and storage that mimics the hardware design of Amazon’s own data centres. It is installed in a customer’s own data centre and monitored, maintained and upgraded by Amazon. This new way of working makes the edge the cloud…almost.

Do we need to centralise or decentralise?

While some applications are best run on-premises, in many cases application owners would like to reap the benefits of edge computing without having to support any on-premises footprint. This requires access to a new kind of infrastructure, something that looks a lot like the cloud but is much more geographically distributed than the few dozen hyperscale data centres that comprise the cloud today.

The direction of travel is to balance computational power against the cost of transmitting data and the latency an application can tolerate. This has repercussions for software and platform development, which must work across both centralised data centres and local computational capability.

What is edge computing?

Now that the resources to support edge computing are emerging, edge-oriented thinking will become more prevalent among those who design and support applications. Following an era in which the defining trend was centralisation in a small number of cloud data centres, there is now a push towards decentralised thinking.

Edge computing is still in its very early stages, but cloud computing is itself less than 15 years old and has already reshaped the industry. On that trajectory, edge or hybrid technology may be changing the landscape within the next 5 years.

Note: Diagram reproduced from Professor Kevin Curran