Tabnine AI coding assistant supports Anthropic Claude 3

Tabnine helps development teams of every size use AI to accelerate software development.

Tabnine has announced that Anthropic's Claude 3 model is now available as one of the Large Language Models (LLMs) that power the company's AI-enabled software development tools, accessed via the API directly from Amazon Bedrock.
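
For illustration only (this is not Tabnine's code): the snippet below is a minimal sketch of how an application might call a Claude 3 model through Amazon Bedrock's runtime API using the AWS SDK for Python (boto3). The region, prompt and token limit are placeholder assumptions.

```python
# Minimal sketch (not Tabnine's implementation): calling a Claude 3 model
# through Amazon Bedrock's runtime API with the AWS SDK for Python (boto3).
import json
import boto3

# Assumes AWS credentials and region are already configured locally.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical coding query; the model ID is Bedrock's identifier for Claude 3 Sonnet.
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {
                "role": "user",
                "content": [{"type": "text", "text": "Explain what this Python function does: def f(x): return x * x"}],
            }
        ],
    }),
)

# The response body is a JSON document; the generated text sits in the first content block.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```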

The collaboration between Tabnine and Anthropic, an AI safety and research company, is intended to boost engineering velocity and code quality.

It is also hoped to improve developer happiness.

Tabnine has built a specialised AI platform that supports code generation, code explanations and automated generation of documentation and tests.

With Claude 3, users can access Tabnine's capabilities across optimised development tools, including code generation, code explanations, documentation generation and AI-created tests.

Team-specific AI

Tabnine also tailors its AI recommendations to each team, so users get accurate, development-team-specific AI interactions and suggestions.

This is delivered by combining context from locally available data in integrated development environments (IDEs) with Tabnine's understanding of the full codebase of a user's company.
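
As a purely illustrative sketch (not Tabnine's actual internals), the Python below shows the general idea of combining locally available IDE context with wider, repository-level context before a query is sent to an LLM. The EditorContext, CodebaseContext and build_prompt names are hypothetical.

```python
# Illustrative sketch only -- not Tabnine's actual implementation.
# Combines local IDE context (the open file and code near the cursor) with
# codebase-wide context (snippets retrieved from a hypothetical repo index).
from dataclasses import dataclass

@dataclass
class EditorContext:
    file_path: str
    visible_code: str           # code around the user's cursor in the IDE

@dataclass
class CodebaseContext:
    related_snippets: list      # snippets from a repository-wide index (hypothetical)

def build_prompt(query: str, editor: EditorContext, codebase: CodebaseContext) -> str:
    """Assemble a single prompt from local and repository-wide context."""
    repo_section = "\n\n".join(codebase.related_snippets)
    return (
        f"Current file: {editor.file_path}\n"
        f"Code near cursor:\n{editor.visible_code}\n\n"
        f"Related code from the repository:\n{repo_section}\n\n"
        f"Developer question: {query}"
    )
```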

“This integration with Anthropic will help developers deliver better code, faster. Our technology is the furthest along in the [AI code assistant] category,” said Brandon Jung, vice president, ecosystem and business development at Tabnine. “[Developers can] instantly select and switch models to best fit project requirements without having to change AI tools. Tabnine is designed specifically for software development use cases and tailors the interaction with each LLM to deliver the highest quality of AI assistance and recommendations.”

Earlier in 2024, Tabnine launched switchable models for Tabnine Chat, allowing users to change the model that powers the assistant in real time.

Specific prompts & context

To ensure that each LLM performs at its best, Tabnine refines the prompts and the context provided to each model, so that a user's software development queries return the best possible answers.
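
To make the idea concrete, here is a minimal, hypothetical sketch (not Tabnine's actual prompt engineering) of how a chat tool could keep model-specific prompt templates and parameters, so the same user query can be routed to whichever model is selected. The MODEL_PROFILES table, prepare_request function and parameter values are assumptions for illustration.

```python
# Illustrative sketch only -- not Tabnine's actual prompt engineering.
# Keeps model-specific system prompts and parameters so the underlying LLM
# can be switched without changing the tool or the user's query.
MODEL_PROFILES = {
    # Hypothetical profiles; a real system would tune these per model.
    "claude-3-sonnet": {
        "system": "You are a concise senior software engineer. Reference file names where relevant.",
        "max_tokens": 1024,
        "temperature": 0.2,
    },
    "gpt-4": {
        "system": "You are an expert pair programmer. Prefer minimal, working examples.",
        "max_tokens": 800,
        "temperature": 0.3,
    },
}

def prepare_request(model_name: str, user_query: str, context: str) -> dict:
    """Build a model-specific request from the same user query and context."""
    profile = MODEL_PROFILES[model_name]
    return {
        "model": model_name,
        "system": profile["system"],
        "messages": [{"role": "user", "content": f"{context}\n\n{user_query}"}],
        "max_tokens": profile["max_tokens"],
        "temperature": profile["temperature"],
    }

# Switching models is then just a matter of selecting a different profile:
request = prepare_request("claude-3-sonnet", "Add unit tests for parse_config()", context="...")
```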

This combination of Tabnine’s engineering and the performance of the underlying model maximises the value users get from each of the LLMs. With this, Tabnine is not only expanding the number of state-of-the-art models available in Tabnine Chat, but also helping users avoid lock-in to any single underlying LLM.

“We’re excited to make the Claude 3 model family available to developers using Tabnine as a way to power their AI code assistant solution,” said Jamie Neuwirth, revenue leader at Anthropic. “The Claude 3 models are designed to offer industry-leading options across intelligence and speed, all while prioritising AI safety. We look forward to seeing the innovative solutions that Tabnine users build and put into production.”

By working with Anthropic, Tabnine further demonstrates its commitment to bringing new, state-of-the-art models to Tabnine Chat as soon as they become available in the market.

When considering which underlying model to select, Tabnine provides transparency into the behaviours and characteristics (e.g. security and performance) of each available model, to help users decide which is best for each unique use case.
