
How Snowflake is tackling AI challenges

Snowflake’s regional leader Sanjay Deshmukh outlines how the company is helping customers to tackle the security, skills and cost challenges of AI implementations

Security, cost and the shortage of skills are among the primary concerns for organisations in ASEAN when embarking on artificial intelligence (AI) initiatives, according to a regional executive at Snowflake.

Speaking to Computer Weekly about Snowflake’s shift to enterprise AI, Sanjay Deshmukh, the company’s senior regional vice-president for ASEAN and India, said security is the foremost consideration among organisations implementing AI.

“The enterprise AI mindset is different and starts with data rather than copilots and large language models [LLMs], which come later,” he said. “But unless organisations are convinced about security, they will not put anything in production.”

To alleviate security concerns, Snowflake, which does not use customer data to train AI models, including those from third parties, has built the Horizon platform, Deshmukh explained. Horizon allows organisations to discover and govern data, apps and models with a built-in set of compliance, security, privacy, interoperability and access capabilities.

Horizon includes a trust centre that assesses the current security posture of an account, end-to-end encryption that prevents third parties from reading data at rest or in transit, and granular authorisation controls that govern access to objects.

“This governance model will always apply regardless of which machine learning model, LLM or analytics report is trying to access the data,” Deshmukh said. “And if there is personally identifiable information [PII] that has been tokenised or masked, it will always be masked and not be disclosed.”
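As an illustration of the policy-based masking Deshmukh describes, the sketch below applies Snowflake’s documented masking-policy DDL through the Python connector. The account, table, column and role names are placeholders, and the masking rule itself is an assumption for illustration, not Snowflake’s or any customer’s actual configuration.

```python
import snowflake.connector

# Hypothetical connection details; externalbrowser uses SSO rather than a stored password.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    authenticator="externalbrowser",
)
cur = conn.cursor()

# Define a masking policy: only a designated role (placeholder name) sees raw e-mail
# addresses; everyone else, including any model or report querying the column, gets
# a masked value.
cur.execute("""
    CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING) RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val
           ELSE '***MASKED***'
      END
""")

# Attach the policy to a (hypothetical) PII column so the rule travels with the data.
cur.execute("ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask")
```

Because the policy is attached to the column rather than to any one tool, the same masked output is returned whether the query comes from an analyst, an LLM or a downstream report, which is the behaviour Deshmukh describes above.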

Earlier this year, Snowflake became the target of a financially motivated threat actor that was suspected to have stolen a significant volume of records from Snowflake customer environments via stolen customer credentials. The affected customers included ticketing platform Live Nation, Santander Bank and US telecoms giant AT&T.

In a joint statement on their investigations into the matter, Snowflake, Mandiant and CrowdStrike said they had not identified evidence suggesting the activity was caused by a vulnerability, misconfiguration or breach of Snowflake’s platform.

They also found no evidence suggesting the activity was caused by compromised credentials of current or former Snowflake employees, adding that it appeared to be a targeted campaign aimed at users with single-factor authentication.

Snowflake is developing a plan to require its customers to implement advanced security controls, such as multi-factor authentication (MFA) or network policies, especially for privileged Snowflake customer accounts.
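For a sense of what such account-level controls can look like, here is a minimal sketch using Snowflake’s network policy and authentication policy DDL via the Python connector. The account name, IP range and policy names are hypothetical, the exact policy properties may differ by edition, and a real rollout of MFA enforcement would be planned with Snowflake’s guidance rather than copied from this example.

```python
import snowflake.connector

# Hypothetical administrative connection; account-level policies need ACCOUNTADMIN
# or a role delegated the relevant privileges.
conn = snowflake.connector.connect(
    account="my_account",
    user="admin_user",
    authenticator="externalbrowser",
    role="ACCOUNTADMIN",
)
cur = conn.cursor()

# Restrict logins to a corporate IP range (placeholder addresses).
cur.execute("CREATE NETWORK POLICY corp_only ALLOWED_IP_LIST = ('203.0.113.0/24')")
cur.execute("ALTER ACCOUNT SET NETWORK_POLICY = corp_only")

# Require MFA enrolment for users through an account-wide authentication policy.
cur.execute("CREATE AUTHENTICATION POLICY require_mfa MFA_ENROLLMENT = REQUIRED")
cur.execute("ALTER ACCOUNT SET AUTHENTICATION POLICY require_mfa")
```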

Plugging the skills gap

Noting that most companies tend to have more data analysts than AI engineers, Deshmukh said Snowflake aims to make it easy for them to leverage AI capabilities.

“If you’re using the Snowflake Cortex AI platform, you don’t have to worry about the underlying infrastructure – you can pick a model of your choice to solve your business problem,” he added.
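The “pick a model of your choice” idea can be sketched as follows: Cortex exposes hosted models behind a SQL function, so switching models is a matter of changing one argument. The connection details are placeholders, the prompt is invented, and the model identifiers reflect names documented at the time of writing, which may change.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", authenticator="externalbrowser"
)
cur = conn.cursor()

prompt = "Summarise the main drivers of customer churn in three bullet points."

# Swapping the first argument swaps the model, with no change to infrastructure.
for model in ("snowflake-arctic", "mistral-large"):  # identifiers current at time of writing
    cur.execute("SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)", (model, prompt))
    print(model, "->", cur.fetchone()[0])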

Deshmukh said the models available through Cortex include Document AI, which can extract data from documents, as well as Arctic, an LLM purpose-built to deal with unstructured data.

“The Arctic model will take your inputs and give you a copilot kind of chat interface to ask questions about customer agreements, for example. You don’t need a sophisticated AI engineer – all you need to do is build a pipeline to point the data to the model, and you can start using the copilot or build an application on top of it.”
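The kind of simple pipeline Deshmukh describes might look like the sketch below: pull extracted agreement text from a table (a hypothetical customer_agreements table, populated upstream, for example by a Document AI extraction job) and pass it to Arctic as context for a question. The table, column and connection details are assumptions for illustration.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", authenticator="externalbrowser"
)
cur = conn.cursor()

question = "What is the notice period for terminating this agreement?"

# Fetch the agreement text from a hypothetical table populated upstream.
cur.execute(
    "SELECT agreement_text FROM customer_agreements WHERE customer_id = %s", (42,)
)
context = cur.fetchone()[0]

# Hand the text to Arctic as context and ask the question, copilot-style.
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)",
    (
        "snowflake-arctic",
        f"Using only the agreement below, answer the question.\n\n"
        f"Agreement:\n{context}\n\nQuestion: {question}",
    ),
)
print(cur.fetchone()[0])
```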

By using purpose-built models that may not be the biggest or most expensive, Deshmukh said organisations can also expect cost efficiencies. “The customer doesn’t have to spend a lot of money to solve the business problem by picking a bigger model,” he added.

Since the debut of Cortex in November 2023, organisations across ASEAN have begun exploring the platform to develop AI applications and refine models. Deshmukh noted particular interest among skilled users in testing open-source models such as Llama 2 and Mistral, alongside Arctic, which excels in SQL generation for analytical tasks.

According to a study by TechTarget’s Enterprise Strategy Group, nearly all organisations encountered challenges when implementing AI, led by the limited availability of quality data for models, high costs, data privacy and security concerns, as well as the lack of expertise and talent.
