Homeworking – the new norm – is there any other option…

In the last blog, I touched on the lasting effects of the COVID-19 pandemic; the obvious fallout here is the massive increase in home/remote working, something I’ve been practising since 1989 – yes, that long!

The Internet-connected world has never experienced a ramp-up on quite this scale before, and it showed. Even where an IT solution – maybe an application, or a security authentication mechanism – has been designed for mass, global usage, a sudden ramp-up in users can cause outages, either local or total. One high-profile casualty early in the lock-down period was Microsoft’s Teams unified communication and collaboration platform, so it doesn’t matter how big the company. Other problems were frequently reported, such as cloud-based Internet security authentication failures and virtual desktop sessions being unavailable. Even networking giants within the IT world itself (begins with a “C”) reported massive increases in the number of help desk tickets being generated as a result of the workforce shifting to homeworking almost overnight. Maybe they should look at acquiring such technology themselves; the artist formerly known as Helpdesk – now Service Desk or Service Management – is surely the next “overnight success”, as in only 30+ years in the making. I’ve just been doing some work with a client in that sphere who spans that length of time – Richmond Systems – and whose (admittedly bang up-to-date) software solution is more relevant than it’s ever been.

But such outage examples – regardless of the extremity of the situation – are not inevitable, as many assume; they are absolutely avoidable, though rarely resolved by the application providers themselves. I’ve been working for decades with vendors and technologies that well and truly resolve these problems – the issue is simply that they are not being deployed widely or correctly enough. A great example is Kemp Technologies, for whom I carried out global testing of its AX (Application Experience) application delivery technology last year, and for whom I recently delivered a paper on the very subject being discussed in this blog:

https://kemptechnologies.com/resource-library/industry-research/

Here’s a technology rooted in Load-Balancing/Application Delivery Control that has moved through the virtual gears, so it can effectively scale infinitely (allowing for intergalactic Internet extensions – a combination of 5G and WiFi-6, or maybe 555G and WiFi-666?) to optimise – and secure (see previous blog)! – application and data delivery, regardless of demand, not only from anywhere to anywhere, but it can also be managed from anywhere. Such as South Devon, IT hotbed of the universe, powered by cream tea manufacturing by-product…
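For anyone who hasn’t played with one, the basic principle is simpler than the marketing suggests. Here’s a deliberately minimal round-robin sketch in Python – purely illustrative, with made-up backend addresses, and in no way Kemp’s AX technology or anyone else’s actual product – showing the core trick of spreading requests across a pool of servers so no single box takes the full strain:

```python
# Minimal, illustrative sketch only – not any real ADC or load balancer.
# It shows the core idea: rotate incoming requests across a pool of
# backend servers so no single server shoulders the whole load.

from itertools import cycle

# Hypothetical backend pool – in a real deployment these would be
# configured or discovered, and could sit in any Data Centre.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

_rotation = cycle(BACKENDS)

def pick_backend() -> str:
    """Return the next backend in simple round-robin order."""
    return next(_rotation)

if __name__ == "__main__":
    # Ten incoming requests get spread evenly across the three servers.
    for request_id in range(10):
        print(f"request {request_id} -> {pick_backend()}")
```

Real ADCs layer on health checks, TLS offload, caching, geo-steering and the rest, but the scaling story starts right there.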

The reality is that some folks who are new to homeworking – who swore they could never cope with the distractions of working from home (until they realised the mortgage still has to be paid), despite sitting in traffic for several hours a day – might actually PREFER it! Reports in the national media are also suggesting that the office-versus-homeworking balance won’t go back to how it was, with many employees – at companies that have sent all staff home – already starting to question why they had to go into the office in the first place. Looking beyond business and industry, education is another obvious candidate for home-based connectivity as a long-term option. However, such changes in usage patterns create more, and different, problems for service providers and Data Centre management. In Italy, for example, during the nationwide quarantine, peak Internet traffic went up by over 30% and overall usage increased by around 70%. Moreover, from a traffic management perspective, usage patterns shifted, so peak traffic was occurring earlier in the day in impacted regions. The change in bandwidth peaks and troughs was further impacted by national school suspensions, meaning housebound schoolchildren were competing with workers for data and application access, and Internet bandwidth in general.

Key to application and data availability, however, is what happens at the Data Centre, or wherever those applications and data reside. Server overload – whether of CPU, memory, disk access or network access – is not a new issue, but it is still the primary cause of unavailability, user frustration and, more importantly, lost productivity. And, as noted previously, it is – in 99.999% of circumstances – completely avoidable. At the same time – see previous blog, it’s that “budget” word again – these solutions have to be available at a realistic cost. As the IT world increasingly transitions from a fixed CapEx model to an OpEx-based budget, having a range of licensing options – perpetual, subscription-based or metered (based on usage/data throughput or VM/container instances, for example) – is fundamental in allowing the tech to go to as many good homes as possible; tech, that is, that started life as a (very) expensive, hardware-based product with absolute performance and scaling limitations. But times have changed, and so has that technology.
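And to show why I say “avoidable”: the standard trick is a health check that pulls an overloaded server out of rotation before any user hits it. The sketch below is just that – an illustration with invented thresholds and addresses, not any vendor’s actual health-check logic:

```python
# A hedged illustration, not any product's implementation: server overload
# is avoidable because a health check can take an overloaded server out of
# rotation before users ever reach it.

from dataclasses import dataclass

# Thresholds are invented for the example – real products make these tunable.
CPU_LIMIT = 0.85        # 85% CPU utilisation
CONNECTION_LIMIT = 500  # concurrent connections per server

@dataclass
class Backend:
    address: str
    cpu: float          # current CPU utilisation, 0.0 to 1.0
    connections: int    # current concurrent connections

def healthy(b: Backend) -> bool:
    """A server stays in rotation only while it has headroom."""
    return b.cpu < CPU_LIMIT and b.connections < CONNECTION_LIMIT

pool = [
    Backend("10.0.0.11:8080", cpu=0.42, connections=120),
    Backend("10.0.0.12:8080", cpu=0.97, connections=480),  # CPU overloaded
    Backend("10.0.0.13:8080", cpu=0.55, connections=650),  # too many connections
]

if __name__ == "__main__":
    in_rotation = [b.address for b in pool if healthy(b)]
    print("serving traffic from:", in_rotation)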

In conclusion, the COVID-19 pandemic has resulted in the mass adoption of remote and homeworking, and significant pressure to support 24×7 access to critical applications and data. But what came out of necessity might now become the new norm across many industries, as most key indicators are suggesting. The ability, therefore, of IT infrastructure to support that remote workplace shift, both at the endpoint and – critically – at the Data Centre, is not simply beneficial, but a “must have”.

And, as noted here, that technology does exist, and at an affordable price-point. So, all of you application, cloud and managed service providers out there – no excuses for downtime, ok?