LLM series - Sonatype: The good & the bad of LLMs-as-a-service
This is a guest post for the Computer Weekly Developer Network written by Ilkka Turunen in his capacity as field CTO at Sonatype.
From development to production and everything in between, Sonatype Lifecycle monitors the health and policy compliance of open source components – organisations can produce a Software Bill of Materials (SBOM) and remediate vulnerabilities quickly with full visibility.
Turunen writes in full as follows…
Large language models-as-a-service have a lot of short-term appeal for developers and the enterprises they work at.
But these are also a service model developers should be extremely careful about because LLMs by their nature are not bound by fact. They require very robust guidance to check their work and control bias.
Start positive
It’s worth starting with the positives.
They have the capacity to dramatically fast-forward development and simplify the integration of capabilities into applications through straightforward API calls. The addition of AI can range from test data generation to more traditional chat-oriented tasks that can speed development and deepen the end-user experience.
But there are also major limitations that developers should be aware of, chief among them inconsistency.
Today’s base models have to be prompted in a careful, considered way, often playing to each model’s limitations. The output can be inconsistent – by their nature, models “hallucinate” or don’t always stick to the script, providing output that is not always what was intended.
Count the cost
Another dimension to consider is cost.
Most API-accessible models are paid per token sent and received, which can accrue fast following extensive use. The calculations for these requests are not always straightforward and an intimate understanding of the payload is required.
You also have to factor in data privacy and security concerns because enterprises might not have the full visibility they need into how their data is being used by the service provider.
Recent prompt injection POCs have proven that it’s possible for end users to get to the underlying data or system prompts on the model – especially when using fine-tuned models. Also, from a legal point of view, copyright of the results has to be taken into account.
Lastly, if you rely on one particular service, the risk of vendor lock-in is extremely high. As soon as features start deprecating, or the vendor gets hit by outages or unexpected changes in model performance that don’t align with the specific task, you’re suddenly stuck with that situation.
The current providers are starting to mature, but it’s a long way to be fully assured of the SLAs of different layers.
This isn’t to say ‘LLM-as-a-service is a bad idea’ at all. It does mean developers should carefully consider an enterprise’s workflow, in addition to balancing the pros and cons.
We’re likely going to continue seeing more companies turning towards open-source AI solutions, and that’s going to bring a lot of familiar challenges related to consumption of any open-source components – it’s just that in this case, they’re the AI models themselves.