IT talent trap: How cloud skills suffer as a result of supplier lock-in
Even when supplier partners dominate, providers can keep some independence by nurturing in-house skills
Supplier lock-in, especially at hyperscale level, is a fact of life in the cloud that can mean in-house skills wither on the vine. However, cloud migrations can be managed in ways that help transfer key skills to in-house staff.
According to industry players like Rob Reid, technical evangelist at Cockroach Labs, some reliance on external parties or partners goes with the territory in most cases.
“In my experience, whenever on-prem and cloud expertise have been required, outsourcing has always been required. Even up to the point where clouds really took off, most people had skills in one or the other, but not both,” Reid points out.
Unless the in-house team is part of a migration from start to finish, including planning, build and cutover, they will miss the opportunity to gain critical knowledge of the target cloud or clouds.
Hyperscalers do things sufficiently differently that simply switching clouds and expecting staff to be immediately proficient in the new environment is not realistic without deliberate effort, Reid notes.
Contract negotiation can also offer a fertile ground for achieving skills and knowledge transfer – although the supplier, knowing there’s a degree of lock-in, can have more power in the relationship.
For that reason, Reid says, it can make sense to go multicloud, whether that means running workloads across multiple clouds or simply maintaining a cloud exit strategy that gives you more leverage.
Reid also advises choosing cloud-agnostic tools where possible. “This is especially important for your databases, as these are often the most difficult component to move when required,” he says.
“If you’re trying to move 100TB [terabytes] of data out of a cloud-specific database, that lengthens your cloud migration process enormously and makes it more risky. Whereas if you’ve got a cloud-agnostic database, you can simply perform what we call a stretch migration, spinning up new nodes in target infrastructure.”
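In practice, a stretch migration boils down to growing the cluster into the target cloud and then shrinking it out of the source cloud. The sketch below is a minimal illustration of that idea, assuming a CockroachDB-style cluster where nodes are added with cockroach start --join and retired with cockroach node decommission; the host addresses, node IDs and flags are hypothetical, and a real deployment would normally drive this through its own orchestration tooling rather than raw SSH.

```python
"""Illustrative sketch of a 'stretch' migration: add database nodes in the
target cloud to the existing cluster, wait for data to rebalance, then
decommission the nodes left in the source cloud. Hosts, node IDs and flags
are hypothetical."""
import subprocess

SOURCE_NODE_IDS = ["1", "2", "3"]                        # nodes in the source cloud (hypothetical IDs)
TARGET_HOSTS = ["10.1.0.11", "10.1.0.12", "10.1.0.13"]   # new VMs in the target cloud
JOIN_ADDRS = "10.0.0.11,10.0.0.12,10.0.0.13"             # existing cluster addresses

def add_target_nodes() -> None:
    """Start new nodes in the target cloud and join them to the running cluster."""
    for host in TARGET_HOSTS:
        subprocess.run(
            ["ssh", host,
             f"cockroach start --insecure --join={JOIN_ADDRS} "
             f"--advertise-addr={host} --background"],
            check=True,
        )

def retire_source_nodes() -> None:
    """Decommission the old nodes once replicas have rebalanced onto the new ones."""
    for node_id in SOURCE_NODE_IDS:
        subprocess.run(
            ["cockroach", "node", "decommission", node_id,
             "--insecure", "--host", TARGET_HOSTS[0]],
            check=True,
        )

if __name__ == "__main__":
    add_target_nodes()
    # In practice you would wait for range rebalancing to complete before this step.
    retire_source_nodes()
```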
Spot instance management
For instance, with Amazon Web Services’ (AWS) auto-scaling groups, 80% CPU utilisation might trigger a new node to join the pool, balancing out utilisation across all the nodes. As utilisation comes down again, the node simply “pops off” and load consolidates back onto the remaining nodes.
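That kind of elastic behaviour can be codified rather than managed by hand. The following is a minimal sketch, assuming boto3 and an existing Auto Scaling group (the group name “web-pool” is hypothetical), of a target-tracking policy that keeps average CPU utilisation around the 80% mark, so instances join the pool as load rises and drop away again as it falls.

```python
"""Minimal sketch of a target-tracking scaling policy on an existing EC2
Auto Scaling group. The group name and region are hypothetical."""
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-2")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-pool",          # hypothetical group name
    PolicyName="cpu-target-80",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 80.0,                  # scale out as average CPU approaches 80%
        "DisableScaleIn": False,              # allow nodes to 'pop off' as load drops
    },
)
```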
Although it’s becoming more commonplace to see people with professional qualifications that span multiple clouds, how deep is that expertise?
Unless you are a huge company with multiple specialists, you will likely always need some level of assistance from partners and suppliers as complex cloud technology continues to evolve. That applies even to organisations confidently running large enterprise apps across multiple clouds.
“Past experience can fall by the wayside,” says Reid.
David Walker, EMEA field chief technology officer (CTO) at database company Yugabyte, notes that supplier training credits can often be negotiated. At Yugabyte, staff support three databases and three platforms, so the company frequently hires across the hyperscalers.
“Most individuals choose to align themselves up and down the stack, learning seven Amazon tools, rather than going across,” says Walker. “Some of the skills are transferable, but more arcane knowledge about what it takes to manage costs can be Amazon-specific, Google-specific or whatever.”
Do you have two skillsets, two teams of experts, or one specialised team that must be retrained? It can become quite a complex matrix, with personal career ambitions to consider too. People typically want to move into the most employable area, which can run counter to a company’s objective of having people across different providers.
This can make it tough to estimate costs – and training and certification are not cheap.
“You’ve also got to look at the cost in terms of reskilling somebody if you’re going to commit to a dual-platform strategy,” says Walker. “It’s a strategy that employers need to figure out.”
An approach that saves £10m over a few years due to more negotiating power can balance out the financial investment in training your people.
If you are going to the market for more skills, consider modifying job adverts to mention the chance of training, to broaden the pool. Ask what is core to your requirement and what is peripheral. Can you hire general cloud skills that can be augmented for your specialty?
Rob Smith, CTO at cloud services provider CreativeITC, fields plenty of queries and concerns about lock-in and agrees that organisations should be proactive in combating it. After all, it benefits the supplier too if it does not always have to pick up the phone to help the customer through some issue or other.
An abstraction plan
A multicloud or Kubernetes strategy might help, but the issue comes when the customer decides they want to use native services inside these clouds – starting with products unique to one hyperscaler, while wanting to run something else alongside them that carries stricter privacy considerations, says Smith.
Supplier-agnostic tools should be leaned on where possible, he adds.
“We try for an almost 50/50 relationship, where they still have technical in-house and we’re helping them essentially behind the scenes, or maybe publicly, actually dealing with users,” says Smith. “Making sure we can take all the phone calls and answer tickets and have all knowledge base articles filled out.”
Driving skills transfer and development within customers tends to be an in-person activity. The best results are usually achieved by seeing the customer face to face – shadowing and working alongside engineers on-site, for example – rather than working remotely.
What about large language models (LLMs) such as ChatGPT? Smith says CreativeITC is seeing some promise in using them to deliver information that helps engineers complete certain “third-line” senior tasks, as well as to write up replies to customers more quickly based on the ChatGPT response.
“With ChatGPT, we’ve been doing a lot of testing internally, ingesting a ticket from a customer and then adding an internal note to our engineers – potentially commentary that my first- and second-line engineers would not ordinarily know,” he says.
“But what I’m not looking for is a kind of AI [artificial intelligence]-powered helpdesk. Our engineers are looking at the tickets. We’re just looking at how to use ChatGPT to solve the issue quicker.”
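A minimal sketch of the workflow Smith describes, assuming the OpenAI Python client, might look like the following; the ticket text, model choice and the helpdesk call that would attach the note are all hypothetical.

```python
"""Illustrative sketch: feed an incoming support ticket to an LLM and attach
the response as an internal note for engineers, not as a customer reply.
Ticket text, model name and the add_internal_note() helper are hypothetical."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_internal_note(ticket_text: str) -> str:
    """Ask the model for troubleshooting pointers a first- or second-line
    engineer might not otherwise know."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are assisting a cloud support engineer. "
                        "Suggest likely causes and next diagnostic steps."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

ticket = "Customer reports intermittent 504 errors from the load balancer since 09:00."
note = draft_internal_note(ticket)
# add_internal_note(ticket_id, note)  # hypothetical helpdesk API call
print(note)
```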
Altering hiring policies, particularly if your team is not already very diverse, may also expose untapped pools of talent, helping to reduce some of the skills shortage challenges, he adds. And if you are looking to balance out supplier lock-in during a migration, don’t forget to offer and promote real opportunities that will attract people, says Smith.
“It’s difficult to find talent anyway, even though we recruit all over the world, and we do recruit locally as best we can,” he says.
LLMs can also inject efficiency into the production of documentation to guide staff, adds Christoph Dietzel, global head of products and research at Frankfurt internet exchange DE-CIX. His further prescription is to look at skills transfer across your core company layers – and prepare in advance of any move.
Sometimes it can be tough to convince people to change what they are doing, of course, especially if three or five years ago the same people were asked to move in the opposite direction.
“You need to align your IT strategy, the business strategy, with your team and organisational development strategy,” he says. “Don’t just do it as a side hustle.”
Do the sums
If you quantify the costs of having certain skills in-house, you can compare that directly with what you pay for the same level of service from a hyperscaler or other supplier – remembering that bringing a service in-house also entails paying for the skills and the network connectivity that go with it.
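As a back-of-the-envelope illustration of that comparison, with every figure hypothetical, the sums can be as simple as this:

```python
"""Back-of-the-envelope comparison of running a capability in-house versus
buying it as a managed service. Every figure here is hypothetical."""
# Annual cost of keeping the skills in-house
engineers = 3
salary_per_engineer = 75_000          # £ per year, hypothetical
training_and_certs = 15_000           # £ per year across the team
network_connectivity = 24_000         # £ per year for private links, etc.
in_house = engineers * salary_per_engineer + training_and_certs + network_connectivity

# Annual cost of the equivalent managed service from a hyperscaler or supplier
managed_service = 280_000             # £ per year, hypothetical quote

print(f"In-house:        £{in_house:,}")
print(f"Managed service: £{managed_service:,}")
print(f"Difference:      £{managed_service - in_house:,} per year")
```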
“You need to keep the balance and have a good understanding of how the systems play together, and monitor in a smarter way,” says Dietzel. “In the old days, you configured everything manually; nowadays, you get templating.”
For Dietzel, it does not matter so much whether you want to connect AWS with Microsoft Azure, because essentially you have two workloads in different locations with interconnection, managed via a single-pane-of-glass view. The focus should be on related key role profiles.
Large-scale enterprise architecture is, after all, just another level of planning for how much you want to have in the cloud, and how much and what remains in your control, he says – and in any case the approach scales.
“We implemented their APIs [application programming interfaces], and your APIs might not have the same standard,” he says. “But in principle, from a monitoring management level, it’s fairly similar.”
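The sketch below illustrates that single-pane idea under stated assumptions: the AWS source pulls CPU figures from CloudWatch via boto3, while the Azure source is left as a labelled placeholder, since the exact calls depend on the monitoring stack in use. The point is the shared interface, not the individual APIs.

```python
"""Sketch of a 'single pane of glass': expose CPU metrics from workloads in
different clouds behind one interface. The Azure implementation is a
deliberate placeholder."""
from datetime import datetime, timedelta, timezone
from typing import Protocol

import boto3

class CloudMetricSource(Protocol):
    def average_cpu(self, resource_id: str) -> float: ...

class AwsMetricSource:
    def __init__(self, region: str) -> None:
        self.cloudwatch = boto3.client("cloudwatch", region_name=region)

    def average_cpu(self, resource_id: str) -> float:
        """Average CPU over the last 15 minutes for one EC2 instance."""
        end = datetime.now(timezone.utc)
        stats = self.cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": resource_id}],
            StartTime=end - timedelta(minutes=15),
            EndTime=end,
            Period=300,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        return sum(p["Average"] for p in points) / len(points) if points else 0.0

class AzureMetricSource:
    def average_cpu(self, resource_id: str) -> float:
        # Placeholder: query Azure Monitor for the equivalent metric here.
        raise NotImplementedError

def report(sources: dict[str, tuple[CloudMetricSource, str]]) -> None:
    """Print one consolidated view across clouds."""
    for name, (source, resource_id) in sources.items():
        print(name, f"{source.average_cpu(resource_id):.1f}% CPU")
```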
Overall, balance out the risks of lock-in by monitoring and managing resource utilisation and training staff where you can. “Be precious about pricing,” says Cockroach’s Reid. “Use spot instances, where possible, elastic scaling up and down, and always monitor your cloud spend and cost trends.”
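For the monitoring side of that advice, a small sketch using boto3 and AWS Cost Explorer (the date range is illustrative, and other clouds expose similar billing APIs) might be:

```python
"""Minimal sketch of tracking cloud cost trends with AWS Cost Explorer via
boto3. The date range is illustrative."""
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # illustrative range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

# Print one line per month so spend trends are easy to eyeball
for period in response["ResultsByTime"]:
    amount = period["Total"]["UnblendedCost"]["Amount"]
    print(period["TimePeriod"]["Start"], f"${float(amount):,.2f}")
```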
Read more about cloud deployment best practice
- Cloud-native strategies notwithstanding, workloads running on-premise or in private clouds will continue to present management challenges.
- We look at why the cloud is not always the best choice, including for reasons of cost, application suitability, management, data protection and the needs of the business.