ClovaXEmbeddings
This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the API reference.
Overview
Integration details
| Provider | Package |
| --- | --- |
| Naver | langchain-community |
Setup
Before using embedding models provided by CLOVA Studio, you must go through the three steps below.
- Create a NAVER Cloud Platform account
- Apply to use CLOVA Studio
- Find your API keys after creating a CLOVA Studio Test App or Service App (see here)
Credentials
CLOVA Studio requires three keys (`NCP_CLOVASTUDIO_API_KEY`, `NCP_APIGW_API_KEY`, and `NCP_CLOVASTUDIO_APP_ID`) for embeddings.
- `NCP_CLOVASTUDIO_API_KEY` and `NCP_CLOVASTUDIO_APP_ID` are issued per Service App or Test App
- `NCP_APIGW_API_KEY` is issued per account
The two API keys can be found in CLOVA Studio by clicking App Request Status > Service App, Test App List > the 'Details' button for each app.
import getpass
import os
if not os.getenv("NCP_CLOVASTUDIO_API_KEY"):
os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass(
"Enter NCP CLOVA Studio API Key: "
)
if not os.getenv("NCP_APIGW_API_KEY"):
os.environ["NCP_APIGW_API_KEY"] = getpass.getpass("Enter NCP API Gateway API Key: ")
os.environ["NCP_CLOVASTUDIO_APP_ID"] = input("Enter NCP CLOVA Studio App ID: ")
Installation
The `ClovaXEmbeddings` integration lives in the `langchain_community` package:
# install package
!pip install -U langchain-community
Instantiation
Now we can instantiate our embeddings object and embed queries and documents:
- There are several embedding models available in CLOVA Studio. Please refer here for further details.
- Note that you might need to normalize the embeddings depending on your specific use case; a minimal normalization sketch is shown under Embed single texts below.
from langchain_community.embeddings import ClovaXEmbeddings
embeddings = ClovaXEmbeddings(
    model="clir-emb-dolphin",  # set to the model name matching your app ID; defaults to `clir-emb-dolphin`
    # app_id="..."  # set if you prefer to pass the app ID directly instead of using environment variables
)
Indexing and Retrieval
Embedding models are often used in retrieval-augmented generation (RAG) flows, both for indexing data and for retrieving it later. For more detailed instructions, please see our RAG tutorials.
Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`.
# Create a vector store with a sample text
from langchain_core.vectorstores import InMemoryVectorStore
text = "CLOVA Studio is an AI development tool that allows you to customize your own HyperCLOVA X models."
vectorstore = InMemoryVectorStore.from_texts(
    [text],
    embedding=embeddings,
)
# Use the vectorstore as a retriever
retriever = vectorstore.as_retriever()
# Retrieve the most similar text
retrieved_documents = retriever.invoke("What is CLOVA Studio?")
# show the retrieved document's content
retrieved_documents[0].page_content
'CLOVA Studio is an AI development tool that allows you to customize your own HyperCLOVA X models.'
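You can also query the vector store directly instead of going through a retriever. A minimal sketch, assuming the same `vectorstore` created above, using `similarity_search`:
# Query the vector store directly; k limits the number of returned documents
results = vectorstore.similarity_search("What is CLOVA Studio?", k=1)
print(results[0].page_content)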
Direct Usage
Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.
You can directly call these methods to get embeddings for your own use cases.
Embed single texts
You can embed single texts or documents with `embed_query`:
single_vector = embeddings.embed_query(text)
print(str(single_vector)[:100]) # Show the first 100 characters of the vector
[-0.094717406, -0.4077411, -0.5513184, 1.6024436, -1.3235079, -1.0720996, -0.44471845, 1.3665184, 0.
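As noted under Instantiation, some use cases call for normalized embeddings. A minimal L2 normalization sketch, assuming `numpy` is installed (it is not required by the integration itself):
import numpy as np
# L2-normalize the query embedding so its vector length becomes 1
vec = np.array(single_vector)
normalized_vector = vec / np.linalg.norm(vec)
print(np.linalg.norm(normalized_vector))  # approximately 1.0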
Embed multiple texts
You can embed multiple texts with `embed_documents`:
text2 = "LangChain is the framework for building context-aware reasoning applications"
two_vectors = embeddings.embed_documents([text, text2])
for vector in two_vectors:
    print(str(vector)[:100])  # Show the first 100 characters of the vector
[-0.094717406, -0.4077411, -0.5513184, 1.6024436, -1.3235079, -1.0720996, -0.44471845, 1.3665184, 0.
[-0.25525448, -0.84877056, -0.6928286, 1.5867524, -1.2930486, -0.8166254, -0.17934391, 1.4236152, 0.
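To compare documents against a query yourself, cosine similarity over the returned vectors is a common choice. A minimal sketch, again assuming `numpy` (this is illustrative, not part of the ClovaXEmbeddings API):
import numpy as np
# Cosine similarity between a query embedding and each document embedding
query_vec = np.array(embeddings.embed_query("What is CLOVA Studio?"))
for doc_text, doc_vec in zip([text, text2], two_vectors):
    doc_vec = np.array(doc_vec)
    score = np.dot(query_vec, doc_vec) / (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec))
    print(f"{score:.3f}  {doc_text[:40]}")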
Additional functionalities
Service App
When going live with a production-level application using CLOVA Studio, you should apply for and use a Service App (see here).
For a Service App, a dedicated `NCP_CLOVASTUDIO_API_KEY` and `NCP_CLOVASTUDIO_APP_ID` are issued, and the app can only be called with them.
# Update environment variables
os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass(
"Enter NCP CLOVA Studio API Key for Service App: "
)
os.environ["NCP_CLOVASTUDIO_APP_ID"] = input("Enter NCP CLOVA Studio Service App ID: ")
embeddings = ClovaXEmbeddings(
    service_app=True,
    model="clir-emb-dolphin",  # set to the model name matching the app ID of your Service App
    # app_id="..."  # set if you prefer to pass the app ID directly instead of using environment variables
)
API Reference
For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the API reference.
Related
- Embedding model conceptual guide
- Embedding model how-to guides