SaaS series - HYCU: Connecting SaaS entities
This is a guest post for the Computer Weekly Developer Network written by Andy Fernandez in his capacity as director of product management at HYCU Inc – a company known for its Data Protection as-a-Service (DPaaS) platform that works wherever data sits i.e. in hybrid environments, on-premises, multi-cloud, SaaS apps or across the IoT edge.
Fernandez writes in full as follows…
I’ll talk about SaaS connectivity as someone who has observed the evolution and adoption of SaaS from an IT and data protection standpoint.
QUESTION: How do we connect SaaS entities?
ANSWER: When diving into the motivations for ‘connecting’ SaaS entities, the usual drivers stand out: enhanced visibility, streamlined productivity workflows and cost efficiency. Yet equally important, and often overlooked, is the necessity for monitoring, security and data protection.
Two critical scenarios come to the forefront when discussing these connections:
- Identity Management (MFA/SSO): In the landscape of SaaS applications, from Notion to Miro and Monday.com, a centralised system to manage user access and authentication isn’t just a benefit; it’s essential. This is especially true given that these platforms are prime targets for ransomware attacks.
However, a major challenge is restricted access to advanced security features. Many SaaS platforms limit features like Multi-Factor Authentication (MFA) to their premium or enterprise tiers, making it challenging for SMBs and midsized organisations on ‘standard’ plans.
The remedy? Opt for a Single Sign-On (SSO) system, such as Okta. With this, users need only a single set of credentials across services, ensuring a consistent security level on every platform and guarding against threats like ransomware. (A token-validation sketch follows this list.)
- Data Protection and Backups: It’s a common misconception that SaaS providers handle all aspects of data protection. The reality is closer to a parking garage: the operator keeps the facility running, but individual users must protect their own ‘vehicles’, i.e. their data. In the SaaS world, if data gets deleted, corrupted or encrypted, the responsibility lies with the customer.
This presents a challenge. While on-premises applications have standardised protection methods, each SaaS application exists in its own distinct ‘walled garden’ with unique APIs, data structures and requirements. The solution? Actively ‘connect’ or integrate these platforms into a unified data protection strategy. This might sometimes involve devising custom scripts, especially within DevOps settings; a sketch of such a script appears below. Still, any protection mechanism is a step in the right direction.
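To make the SSO point concrete, here is a minimal sketch in Python of validating an OIDC ID token issued by a provider such as Okta, using the PyJWT library. The issuer URL and audience here are hypothetical placeholders, not a recommendation of any particular tenant setup.

```python
# A minimal sketch of validating an OIDC ID token from an SSO provider.
# ISSUER and AUDIENCE are hypothetical placeholders.
import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://example.okta.com/oauth2/default"  # hypothetical tenant
AUDIENCE = "api://my-app"                           # hypothetical client ID

def validate_id_token(token: str) -> dict:
    # Fetch the signing key matching the token's 'kid' header from the
    # provider's published JWKS endpoint.
    jwks_client = PyJWKClient(f"{ISSUER}/v1/keys")
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Verify signature, issuer and audience in one call; an invalid
    # token raises jwt.InvalidTokenError.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```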
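And for the custom backup scripts just mentioned, a minimal sketch under stated assumptions: the SaaS export endpoint, the token handling and the bucket name are all hypothetical, standing in for whichever application needs protecting.

```python
# A minimal sketch of a DevOps-style backup script: pull an export from
# a SaaS application's REST API and land it in S3. Endpoint, token and
# bucket are hypothetical placeholders.
import datetime
import boto3
import requests

SAAS_EXPORT_URL = "https://api.example-saas.com/v1/export"  # hypothetical
API_TOKEN = "..."           # in practice, fetched from a secrets manager
BUCKET = "my-saas-backups"  # hypothetical bucket name

def backup_saas_export() -> str:
    resp = requests.get(
        SAAS_EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    # Timestamped keys make point-in-time restores straightforward.
    key = f"exports/{datetime.datetime.utcnow():%Y-%m-%dT%H%M%SZ}.json"
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=resp.content)
    return key
```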
QUESTION: Who is responsible for connecting SaaS services?
ANSWER: When considering who bears the responsibility for connecting SaaS services, the initial thought usually veers towards the admin or the main beneficiary of the SaaS service. These individuals often connect services to harness increased productivity and optimise their workflows. However, the responsibility is broader and spans three distinct personas who play the crucial role of ‘connectors’:
- CISOs and SecOps: As SaaS solutions become a cornerstone of business operations, they simultaneously emerge as attractive targets for security breaches. The mantle therefore falls upon CISOs and SecOps professionals. Their mission is to ensure that SaaS applications are not just added to the organisation’s toolbox but are integrated with Single Sign-On (SSO) providers. It doesn’t end there; they must also be interlinked with Data Loss Prevention (DLP) mechanisms and vulnerability assessment workflows, effectively elevating the protection of SaaS applications to match that of traditional on-premises solutions.
- IT Managers: The era of IT managers solely overseeing applications within their immediate purview has passed. The proliferation of SaaS solutions has introduced complexities and, with them, a fresh mandate. IT managers need to bridge the divide, ensuring that SaaS applications are seamlessly connected, particularly in critical areas like data protection, backup, recovery and regulatory compliance. This role sees them involved in nuanced tasks: setting up automated backups for popular apps like Notion, for example, or ensuring that data retention from a SaaS platform aligns with compliance norms by automating archiving to storage solutions like S3 (see the retention sketch after this list).
- Cloud Architects: These professionals carry the daunting task of melding together a variety of cloud services, ranging from SaaS to DBaaS and PaaS. Organisations, in their pursuit of cost efficiency and scalability, are progressively leaning on public cloud services. This transition brings to the table a rich array of microservices, including the likes of Lambda, DynamoDB and RDS. Cloud Architects ensure these services are effectively interconnected while simultaneously optimising costs and ensuring smooth interoperability.
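As a minimal sketch of the retention alignment described above, the following applies an S3 lifecycle rule in Python with boto3. The bucket name, prefix and retention window are hypothetical; real values would come from the organisation’s compliance policy.

```python
# A minimal sketch of encoding a retention rule so SaaS backup data
# ages out in line with policy. Bucket, prefix and window are
# hypothetical placeholders.
import boto3

BUCKET = "my-saas-backups"  # hypothetical

def apply_retention_policy(days: int = 365) -> None:
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-saas-exports",
                "Filter": {"Prefix": "exports/"},
                "Status": "Enabled",
                # Move older exports to cheaper storage, then expire
                # them once the retention window has passed.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": days},
            }]
        },
    )
```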
QUESTION: What elements of any given cloud stack need to be integrated externally, or indeed, internally?
ANSWER: At the core of any cloud stack lies compute, storage and networking. Elements like monitoring, security and data protection cannot exist in silos and must be interwoven seamlessly to maintain a robust and responsive cloud ecosystem.
External integration gains significance when certain specialised services outshine their equivalents within public cloud ecosystems. Some data warehouses and third-party PaaS solutions offer performance that eclipses the standard public cloud offerings. Moreover, the hyperscale giants have birthed proprietary offerings, with databases and machine learning platforms being prime examples. Take BigQuery, for instance, a platform many organisations might favour for specific tasks over its competitors.
But let’s pivot to a non-negotiable aspect of integration – security and data protection. These elements require a comprehensive approach that extends beyond the borders of internal integration. Given the landscape where many enterprises are juggling multi-cloud architectures, sprinkled with a diverse mix of SaaS, DBaaS and PaaS applications, the safeguarding of data and ensuring its protection becomes a sprawling challenge. It’s paramount that these security measures envelop all platforms and services, obliterating any notion of boundaries, whether they be of a single application or an entire platform.
QUESTION: When – in the cloud-native software application development lifecycle – should we connect SaaS services and the wider cloud estate?
ANSWER: The SaaS service must be connected from the beginning if it is part of the application development lifecycle. This is especially true for security and data protection: every SaaS application or cloud service within this lifecycle warrants protection and security mechanisms from day one.
For organisations to foster a robust and secure cloud-native development environment, they need to grapple with and provide clear answers to pivotal questions for every service, application, or function within their operational stack (a sketch of enforcing these checks follows the list):
- Is identity access securely established and maintained?
- What mechanisms are in place to back up data and configurations associated with this service?
- Are there comprehensive logging, notifications and monitoring capabilities in place to maintain oversight?
- Are there set protocols that provide guidance during any incidents, be they accidental glitches or deliberate malicious attacks?
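One way to make this checklist enforceable rather than aspirational is to encode it and gate deployments on it. The following Python sketch is purely illustrative; the field names and the CI-style gate are assumptions, not an established framework.

```python
# A minimal sketch of turning the checklist above into something a CI
# pipeline can enforce before a new service goes live. Field names are
# illustrative, not a real framework.
from dataclasses import dataclass

@dataclass
class ServiceReadiness:
    name: str
    sso_enforced: bool          # Is identity access securely established?
    backups_configured: bool    # Are data and configuration backed up?
    monitoring_enabled: bool    # Logging, notifications, monitoring?
    incident_runbook: bool      # Protocols for accidents or attacks?

    def gate(self) -> None:
        # Collect every boolean check that is still unmet.
        missing = [
            field for field, ok in vars(self).items()
            if isinstance(ok, bool) and not ok
        ]
        if missing:
            raise RuntimeError(f"{self.name} not ready: {missing}")

# Example: fail fast if a service is onboarded without day-one protection.
ServiceReadiness("notion", True, True, True, True).gate()
```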
QUESTION: Why does connecting the cloud give us a technology service that represents more than the sum of the parts within?
ANSWER: Connecting to the cloud isn’t merely about accessing another storage or compute resource, given we already have such capabilities on-premises. The cloud offers distinct advantages that elevate it above traditional computing environments.
For instance, with the cloud, an array of services is readily available. Whether you’re looking to expedite machine learning workflows, tap into deep learning, or set up efficient data warehouses, these solutions are merely a few clicks away. Importantly, cloud-based services are not only quick to deploy but also much easier to discontinue than those rooted in traditional data centres.
Furthermore, the cloud environment’s inherent adaptability is unmatched. It effortlessly accommodates vast workloads and unexpected traffic surges, ensuring seamless performance without manual intervention or expansive infrastructure adjustments. By embracing the cloud, organisations can instantly access crucial infrastructure elements, from databases to warehouses, and say goodbye to the arduous tasks associated with on-premises maintenance, patching and troubleshooting. This shift also eliminates substantial capital expenditure (CapEx) and introduces a flexible pay-as-you-go model.
The cloud’s value proposition is more than the sum of its parts. It’s about the expansive scale, the diverse resources and the significant savings in time, energy and cost that come with steering clear of on-premises management and maintenance.
QUESTION: When should we break SaaS connections – and how?
ANSWER: Break SaaS connections when the SaaS application becomes a burden rather than an asset. If its costs exceed its value because of over-architected systems with numerous webhooks and interlinked SaaS applications, it’s a sign to reconsider. Additionally, when a SaaS doesn’t meet your organisation’s security standards, such as lacking Multi-Factor Authentication, encryption, or data residency guarantees, it’s crucial to disconnect to protect your organisation’s assets and integrity. Always prioritise streamlined systems and robust security.
QUESTION: Where – in physical data sovereignty terms – can clouds be connected and where should borders, guardrails and port of entry checks be in place?
ANSWER: In terms of physical data sovereignty, it’s essential to have guardrails set from the outset, even before incorporating a SaaS service or linking it to any business-critical application.
Guardrails ensure that data remains within legal and regulatory boundaries, preventing potential data breaches or misuse. Moreover, it’s not uncommon to see a small group within an organisation adopt a SaaS tool that quickly evolves into a business-critical one, often unbeknownst to the IT department. Such oversight can introduce vulnerabilities and risks. It’s essential to keep checks at every entry and exit point to maintain data security and sovereignty. Always stay vigilant and prioritise thorough oversight.
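As a minimal sketch of such a guardrail, assuming an EU-only residency mandate and S3 as the storage layer, the following Python scan flags buckets whose data sits outside the approved regions. The region list is illustrative.

```python
# A minimal sketch of a sovereignty guardrail: scan S3 buckets and flag
# any whose data lives outside an approved set of regions. The allowed
# list is illustrative.
import boto3

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # e.g. an EU-only mandate

def find_sovereignty_violations() -> list[str]:
    s3 = boto3.client("s3")
    violations = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # get_bucket_location returns None for us-east-1 by API quirk.
        location = s3.get_bucket_location(Bucket=name)["LocationConstraint"]
        region = location or "us-east-1"
        if region not in ALLOWED_REGIONS:
            violations.append(f"{name} is in {region}")
    return violations
```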
QUESTION: What roles do automation and the wider world of RPA play in the cloud connection SaaS landscape?
ANSWER: Automation and RPA have become vital cogs in the machinery of the cloud connection SaaS landscape. Imagine the immense value derived from reducing manual tasks and the subsequent errors that arise. Take, for instance, the automated provisioning of servers in platforms like AWS or Azure; this ensures that infrastructure scales seamlessly and efficiently. Then there’s the matter of data integration. With RPA bots, businesses can effortlessly synchronise data across different SaaS applications. Consider the ease of integrating tools like Salesforce with SAP, all without the need for manual input.
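For the automated provisioning just mentioned, here is a minimal Python sketch using boto3 to launch and tag an EC2 instance. The AMI ID and tags are placeholders; a production pipeline would drive this from infrastructure-as-code rather than an ad hoc script.

```python
# A minimal sketch of automated server provisioning: launch an EC2
# instance with tags so later automation can discover it. AMI ID and
# tag values are placeholders.
import boto3

def provision_server() -> str:
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "provisioned-by", "Value": "automation"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]
```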
Automation is also a trusted ally in cost reduction. By keeping a keen eye on cloud resource usage, automation tools can scale resources down during periods of low demand, optimising costs. And when it comes to the crucial arena of data protection, automation emerges as a saviour yet again. By scheduling regular backups for both cloud services and SaaS applications, businesses ensure they always have a lifeline – a copy of their precious data, safeguarded against unforeseen operational mishaps or cyber threats.
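A minimal sketch of that scale-down behaviour, assuming a hypothetical Auto Scaling group and a fixed business-hours window; in practice this would be triggered by a scheduler such as EventBridge rather than run by hand.

```python
# A minimal sketch of demand-based cost optimisation: shrink an Auto
# Scaling group outside business hours. Group name and schedule are
# hypothetical placeholders.
import datetime
import boto3

GROUP = "web-tier-asg"  # hypothetical Auto Scaling group

def right_size() -> None:
    asg = boto3.client("autoscaling")
    hour = datetime.datetime.utcnow().hour
    # Full capacity during business hours, a skeleton crew overnight.
    desired = 6 if 8 <= hour < 20 else 2
    asg.set_desired_capacity(
        AutoScalingGroupName=GROUP,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )
```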
Automation and RPA are more than just convenience; they’re cornerstones of an efficient, resilient and cost-effective cloud connection landscape.
Beyond APIs
QUESTION: But as we know, cloud connectivity neither starts nor ends with APIs, so where do they fit into the equation and how should we use these invaluable conduit structures safely, effectively and efficiently?
ANSWER: APIs, while not the start or the end of cloud connectivity, play an essential role in weaving together the fabric of the cloud ecosystem. At their core, APIs serve as a universal connector, bridging gaps and allowing diverse systems to engage in conversation. Think of the intricate relationships among cloud services, infrastructure and Git repositories when diving into the realm of application development.
Furthermore, the beauty of modern cloud architectures lies in their modular design. Each module, or component, offers a specific functionality. It’s APIs that facilitate the seamless interactions between these components, letting them coexist and collaborate without needing in-depth knowledge about one another. This paves the way for a system that champions extensibility and adaptability.
However, as we navigate this API-driven landscape, certain safety guidelines and best practices are paramount. Firstly, it’s crucial to make sure API calls are secure. Implement robust authentication and encryption measures and make it a habit to sift through access logs periodically. Anticipate the unexpected; the digital terrain is unpredictable. Whether it’s a SaaS application hiccup or a cloud service glitch, APIs should be resilient, armed with retry policies and backup plans. And never underestimate the power of clarity. Keeping comprehensive and user-friendly documentation ensures that API consumers can navigate without stumbling. Lastly, remain vigilant; monitoring API metrics like usage patterns, response times and error rates can preempt potential pitfalls and ensure optimal performance.
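To ground the resilience and security advice above, here is a minimal Python sketch of an API client session with bearer authentication, timeouts and retry-with-backoff, built on the requests and urllib3 libraries. The endpoint and token are placeholders.

```python
# A minimal sketch of a resilient API client: authenticated session
# with automatic retries and backoff. Endpoint and token are
# placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session(token: str) -> requests.Session:
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    retry = Retry(
        total=5,
        backoff_factor=0.5,  # waits 0.5s, 1s, 2s, ... between attempts
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET", "PUT", "DELETE"],  # idempotent calls only
    )
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

# Usage: always pair retries with an explicit timeout.
# resp = build_session("...").get("https://api.example.com/v1/items", timeout=10)
```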
While APIs aren’t the entirety of cloud connectivity, they’re undeniably its lifeblood, enabling safe, effective and efficient integration across the board.
Data everything
QUESTION: If we further think about the existence of data warehouses, data lakehouses, data reefs (they don’t exist, but it’s in there to make sure you’re listening) and the interchange junctions now created by data marketplaces and data exchanges, then how do we integrate these resources in the connected cloud conundrum and, for the sake of completeness, do we need to think about decoupling and separation of the same channels?
ANSWER: Integrating the tapestry of data resources into the ever-evolving connected cloud puzzle is no small feat. Leveraging tools that streamline data access is a game-changer, whether we’re talking about a Data Virtualisation platform or the emerging concept of a Data Mesh. To ensure seamless data flow between these myriad data pools, cloud-native ETL (Extract, Transform, Load) tools are instrumental. Plus, with most cloud vendors bringing to the table a suite of connectors and APIs, connecting various data sources has never been easier.
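As a minimal sketch of one such ETL hop, reduced here to an S3-to-S3 move in Python so the extract, transform and load steps stay visible; the bucket names and two-column schema are illustrative, not a prescription for any particular data platform.

```python
# A minimal sketch of an extract-transform-load hop between two data
# pools, simplified to S3-to-S3. Buckets, keys and schema are
# illustrative.
import csv
import io
import boto3

s3 = boto3.client("s3")

def etl(src_bucket: str, src_key: str, dst_bucket: str, dst_key: str) -> None:
    # Extract: read the raw CSV export from the source pool.
    raw = s3.get_object(Bucket=src_bucket, Key=src_key)["Body"].read()
    rows = list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))
    # Transform: keep only the fields downstream consumers need.
    slim = [{"id": r["id"], "amount": r["amount"]} for r in rows]
    # Load: write the curated dataset to the destination pool.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["id", "amount"])
    writer.writeheader()
    writer.writerows(slim)
    s3.put_object(Bucket=dst_bucket, Key=dst_key, Body=out.getvalue().encode())
```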
However, let me focus on ‘decoupling’ through the lens of security. When we’re juggling these vast data resources, there’s a significant amount of data movement, and wherever data moves, there lies potential risk. It’s paramount to encrypt this data in transit, diligently managing access controls so that only authorised eyes are on it.
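One concrete way to enforce encryption in transit, assuming S3 again and a hypothetical bucket name, is a bucket policy that denies any request not made over TLS:

```python
# A minimal sketch of "encrypt data in transit" made enforceable:
# attach a policy rejecting any request that is not made over TLS.
# Bucket name is hypothetical.
import json
import boto3

BUCKET = "shared-data-exchange"  # hypothetical

def enforce_tls_only() -> None:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # aws:SecureTransport is false for plain-HTTP requests.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }
    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```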
Eggs in a basket
The concept of decoupling resonates with the age-old wisdom of not putting all your eggs in one basket. By architecting distinct boundaries between various data resources, we introduce a safety net. Picture a grand ship with multiple compartments. If water breaches one section, the entire vessel doesn’t have to go down. Similarly, when one data resource faces a security threat, it doesn’t have to spell disaster for the entire data ecosystem.
Each data resource, whether a warehouse holding confidential data or a marketplace with varying degrees of sensitivity, may demand tailored security protocols. In essence, while the cloud encourages seamless integration, it’s crucial to strike a balance. We aspire for fluidity in operations, but simultaneously, we must remain vigilant, erecting protective barriers and meticulous protocols to safeguard our invaluable data assets.
Andy Fernandez is director of product management at HYCU, focused on public cloud and SaaS data protection. He has been instrumental in developing and introducing SaaS data protection at HYCU, along with the rollout of R-Cloud. Prior to joining HYCU, Andy held product marketing positions at Zerto, Veeam and Tech Data Cisco Solutions Group. He has extensive experience in product marketing, planning and strategy.