Edge & the datacentre: scope considerations for developers

This is a guest post for the Computer Weekly Developer Network written by Chris Carreiro in his capacity as CTO of ParkView at Park Place Technologies, a company specialising in datacentre maintenance with staff on every continent.

Carreiro says the IT trade press has painted a picture of an ongoing power struggle between the cloud and the edge, with one or the other due to claim supremacy.

He argues that the IoT-driven data volume explosion, our mobile lifestyles and a plethora of forthcoming data-intensive, low-latency technologies (such as augmented reality) demand that traditional compute, storage, network and compute-accelerator resources be moved closer to the end user — i.e. to the edge.

But, as we know, any wholesale replacement of cloud technologies is unlikely.

Instead, he suggests, we are entering a period of ‘forced’ distributed architecture across different levels of ‘edginess’, which will complement centralised capabilities in the cloud and/or the enterprise datacentre. Carreiro writes as follows from here…

The hierarchy of edge levels

It is important to note that the edge is not a specific location; it is shorthand for any relocation of processing away from the datacentre and closer to the end user. There are many different levels of edge, and they will differ by industry.

A hierarchy of edge levels might be regional, neighbourhood, street-level and building-level nodes in a smart city, for example. The levels would be different for consumer mobile technology, autonomous vehicles and so on. Micro-datacentres sprinkled around various corporate facilities could be advantageous in certain cases. In broad terms, there will be a ‘spectrum of edge’ spanning from the device level to the gateway and on toward the cloud.
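To make this concrete, here is a minimal sketch of such a hierarchy. The tier names and latency budgets below are illustrative assumptions rather than figures from any real deployment; the point is simply that placement becomes a choice along a spectrum.

```python
from dataclasses import dataclass

# Hypothetical smart-city edge hierarchy. Tier names and latency
# budgets are illustrative assumptions, not measurements.
@dataclass
class EdgeTier:
    name: str
    latency_budget_ms: float  # assumed round-trip budget to the end user

SPECTRUM_OF_EDGE = [
    EdgeTier("device", 1.0),
    EdgeTier("building", 5.0),
    EdgeTier("street", 10.0),
    EdgeTier("neighbourhood", 20.0),
    EdgeTier("regional", 50.0),
    EdgeTier("cloud-gateway", 100.0),
]

def most_central_tier_within(budget_ms: float) -> EdgeTier:
    """Pick the most central tier that still meets a latency budget."""
    candidates = [t for t in SPECTRUM_OF_EDGE if t.latency_budget_ms <= budget_ms]
    return candidates[-1] if candidates else SPECTRUM_OF_EDGE[0]
```

Under those assumed numbers, an augmented-reality workload with a 10ms budget would land at street level, while a nightly analytics job could happily run at the cloud gateway.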

Practice for the perfect shift

To optimise applications for edge computing, it will be vital to take care over where objects are instantiated and on which machine (physical or virtual) memory is allocated. Relocating parts of the business logic from monolithic applications hosted on a central server in the datacentre to the edge can potentially raise scope-resolution issues or, more likely, novel granularity-control problems.

If interdependent objects that would previously have lived on the same virtual or physical server in the datacentre are now separated, one at the edge and one in the datacentre, there may be no pathway for these objects to reference one another or to access a shared memory space. Additionally, when declaring variables and objects in this distributed environment, developers will need to consider which process an object descends from and in what namespace it lives in order to properly address issues like object persistence, data integrity and read/write permissions.
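As a hedged illustration, consider an object that used to be reached through a plain in-memory reference and is now accessed through a remote proxy. The endpoint URL and payload shape here are hypothetical; what matters is that a former field access becomes an explicit network call with its own failure modes and permission checks.

```python
import json
import urllib.request

# Sketch of an edge-side proxy for an object that now lives in the
# central datacentre. The URL and JSON shape are assumptions made
# for illustration only.
class RemoteInventory:
    def __init__(self, base_url: str):
        self.base_url = base_url  # e.g. "https://dc.example.com/inventory"

    def stock_level(self, sku: str) -> int:
        # What was once `self.inventory[sku]` in shared memory is now
        # a round trip to the datacentre, which can fail or be denied.
        with urllib.request.urlopen(f"{self.base_url}/{sku}") as resp:
            return json.load(resp)["stock"]
```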

Control cost, don’t let cost control you

There are costs associated with moving data: pushing data up to a central datacentre for processing and sending the results back to the edge for the user. It will be essential to ensure that, wherever an object or variable is instantiated or a memory declaration is made, it doesn't necessitate a trip across the network to carry out its mission. Without careful consideration of where instances live as part of business process and application design, there is no point in moving processes to the edge; doing so with a traditional application architecture could easily multiply the workload across the network as edge processes constantly refer back to the centralised servers.
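A minimal sketch of the difference, with `fetch_from_datacentre` standing in for any round trip back to the central servers (all names here are illustrative):

```python
import time

def fetch_from_datacentre(key: str) -> str:
    ...  # stand-in for a network call back to the central servers

class ChattyEdgeProcess:
    # Traditional architecture moved to the edge unchanged: one
    # network trip per request, multiplying load across the network.
    def handle(self, key: str) -> str:
        return fetch_from_datacentre(key)

class CachingEdgeProcess:
    # Edge-aware alternative: serve from a local cache and only
    # refresh from the datacentre when the entry has gone stale.
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._cache: dict[str, tuple[float, str]] = {}

    def handle(self, key: str) -> str:
        now = time.monotonic()
        hit = self._cache.get(key)
        if hit and now - hit[0] < self.ttl:
            return hit[1]  # served locally, no trip across the network
        value = fetch_from_datacentre(key)
        self._cache[key] = (now, value)
        return value
```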

In some ways, efficient development at the edge will depart from current practice. Having fewer lines of code isn't the measure of success. Building an application around fewer variables or a shared memory space might result in a less bulky application, but it can hinder efficiency once moved to the edge.
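The same trade-off shows up on the write path. In this sketch, the ‘lean’ design keeps a single shared counter in the datacentre and pays a network trip per event, while the deliberately redundant design duplicates state on each edge node and merges it in batches; every name here is illustrative.

```python
class SharedCounter:
    # Minimal state, maximal chatter: every event is a remote call.
    def increment(self) -> None:
        ...  # remote call to the central server on each event

class EdgeLocalCounter:
    # Duplicated state, fewer trips: events are counted locally and
    # flushed to the datacentre in one batched call.
    def __init__(self) -> None:
        self.count = 0

    def increment(self) -> None:
        self.count += 1  # purely local, no network involved

    def flush_to_datacentre(self) -> None:
        ...  # one batched trip covering many events
```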

After the balancing act, who is responsible?

The move to the edge can raise important security and compliance problems as well.

For instance, a particular business process that previously existed in the datacentre may require administrator access to function. When that process is decoupled from the cloud and sent to the edge, what security profile will it be using? The process may still require administrator-level permissions, but it is no longer safely stored inside the relative security of the datacentre.
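One common mitigation, sketched here under stated assumptions, is to avoid shipping an administrator credential to the edge at all and instead mint a narrowly scoped, short-lived token for the one operation the edge process actually needs. The token format and helper names are hypothetical; the signing key never leaves the datacentre, where tokens are verified.

```python
import hashlib
import hmac
import time

SECRET = b"datacentre-held-signing-key"  # stays in the datacentre

def mint_edge_token(scope: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived token limited to a single named scope."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{scope}:{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_edge_token(token: str, required_scope: str) -> bool:
    """Datacentre-side check: right scope, unexpired, untampered."""
    scope, expiry, sig = token.rsplit(":", 2)
    payload = f"{scope}:{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and scope == required_scope
            and int(expiry) > time.time())
```

The edge process then carries a credential that can, at worst, do one thing for five minutes, rather than an administrator key that can do everything indefinitely.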

It’s now outside the walls and in the wild.

In a simplified example, a retailer might currently host a secure process in the cloud, where it is protected by physical measures, such as biometric access control at the datacentre, as well as extensive network security. Moving that process to the edge – such as to a closet at the back of a retail location – would represent a substantial change in how ‘locked down’ it is.

The various edge-related issues covered above can more easily be considered during greenfield implementations, where applications can be optimised from the ground up for the edge model. Unfortunately, as edge computing takes hold, most developers will not enjoy the benefit of starting fresh. They will be charged with bending and extending existing business processes designed for older, monolithic technologies into this new distributed topology.

There is no free lunch.

Variable declaration, object instantiation, memory allocation and security profile issues must be resolved as such applications are reconfigured to make the jump to the edge.

Carreiro: there is a ‘spectrum of edge’ spanning from the device level to the gateway and on toward the cloud.