Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.
To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the OpenAI Platform. Once you have obtained the key, you can use it like this:
To enable function calling in your application using Embedchain and OpenAI, pass your functions to the `OpenAILlm` class as an array. Here are several ways you can achieve that:
Using Pydantic Models
```python
import os

from pydantic import BaseModel, Field, field_validator

from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"


class QA(BaseModel):
    """A question and answer pair."""

    question: str = Field(..., description="The question.", example="What is a mountain?")
    answer: str = Field(..., description="The answer.", example="A mountain is a hill.")
    person_who_is_asking: str = Field(
        ..., description="The person who is asking the question.", example="John"
    )

    @field_validator("question")
    def question_must_end_with_a_question_mark(cls, v):
        """Validate that the question ends with a question mark."""
        if not v.endswith("?"):
            raise ValueError("question must end with a question mark")
        return v

    @field_validator("answer")
    def answer_must_end_with_a_period(cls, v):
        """Validate that the answer ends with a period."""
        if not v.endswith("."):
            raise ValueError("answer must end with a period")
        return v


llm = OpenAILlm(config=None, functions=[QA])
app = App(llm=llm)

result = app.query("Hey I am Sid. What is a mountain? A mountain is a hill.")
print(result)
```
Using OpenAI JSON schema
```python
import os

from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"

json_schema = {
    "name": "get_qa",
    "description": "A question and answer pair and the user who is asking the question.",
    "parameters": {
        "type": "object",
        "properties": {
            "question": {"type": "string", "description": "The question."},
            "answer": {"type": "string", "description": "The answer."},
            "person_who_is_asking": {
                "type": "string",
                "description": "The person who is asking the question.",
            },
        },
        "required": ["question", "answer", "person_who_is_asking"],
    },
}

llm = OpenAILlm(config=None, functions=[json_schema])
app = App(llm=llm)

result = app.query("Hey I am Sid. What is a mountain? A mountain is a hill.")
print(result)
```
Using actual Python functions
```python
import os

import requests

from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"


def find_info_of_pokemon(pokemon: str):
    """
    Find the information of the given pokemon.

    Args:
        pokemon: The pokemon.
    """
    req = requests.get(f"https://pokeapi.co/api/v2/pokemon/{pokemon}")
    if req.status_code == 404:
        raise ValueError("pokemon not found")
    return req.json()


llm = OpenAILlm(config=None, functions=[find_info_of_pokemon])
app = App(llm=llm)

result = app.query("Tell me more about the pokemon pikachu.")
print(result)
```
To use a Google AI model, you have to set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from Google MakerSuite.
```python
import os

from embedchain import App

os.environ["GOOGLE_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")

app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream:  # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
```
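For reference, the `config.yaml` used above might look like the sketch below. The provider key, model name, and option names are illustrative assumptions; check the provider reference for the exact schema and pick a model available to your key:

```yaml
llm:
  provider: google
  config:
    model: gemini-pro   # assumed model name
    temperature: 0.5
    max_tokens: 1000
    stream: false
```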
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[cohere]'
```
Set the `COHERE_API_KEY` environment variable, which you can find on their Account settings page. Once you have the API key, you are all set to use it with Embedchain.
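A `config.yaml` for the Cohere provider might look like this sketch (the model name and option keys are illustrative assumptions; substitute a model your account supports):

```yaml
llm:
  provider: cohere
  config:
    model: command   # assumed model name
    temperature: 0.5
    max_tokens: 1000
```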
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[together]'
```
Set the `TOGETHER_API_KEY` environment variable, which you can find on their Account settings page. Once you have the API key, you are all set to use it with Embedchain.
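A `config.yaml` for the Together provider might look like this sketch (the model identifier is an illustrative assumption; use any model listed in your Together account):

```yaml
llm:
  provider: together
  config:
    model: togethercomputer/RedPajama-INCITE-7B-Base   # assumed model id
    temperature: 0.5
    max_tokens: 1000
```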
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[opensource]'
```
GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:
```python
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
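The `config.yaml` for GPT4All might look like the sketch below (the model file name is an illustrative assumption; any model from the GPT4All catalog should work):

```yaml
llm:
  provider: gpt4all
  config:
    model: orca-mini-3b-gguf2-q4_0.gguf   # assumed local model file
    temperature: 0.5
    max_tokens: 1000
```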
First, set the `JINACHAT_API_KEY` environment variable, which you can obtain from their platform. Once you have the key, load the app using the config yaml file:
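The JinaChat config might look like this sketch (the provider key and options are illustrative assumptions; consult the provider reference for the exact schema):

```yaml
llm:
  provider: jina
  config:
    temperature: 0.5
    max_tokens: 1000
```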
First, set the `HUGGINGFACE_ACCESS_TOKEN` environment variable, which you can obtain from their platform. Once you have the token, load the app using the config yaml file:
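The Hugging Face config might look like this sketch (the model id is an illustrative assumption; use any text-generation model from the Hugging Face Hub):

```yaml
llm:
  provider: huggingface
  config:
    model: google/flan-t5-xxl   # assumed Hub model id
    temperature: 0.5
    max_tokens: 1000
```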
Llama2 is integrated through Replicate. Set the `REPLICATE_API_TOKEN` environment variable, which you can obtain from their platform. Once you have the token, load the app using the config yaml file:
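The Llama2 config might look like this sketch (the model identifier is a placeholder; copy the exact identifier, including its version hash, from the model's Replicate page):

```yaml
llm:
  provider: llama2
  config:
    model: <replicate-model-identifier>   # placeholder, copy from Replicate
    temperature: 0.5
    max_tokens: 1000
```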
Set up Google Cloud Platform application credentials by following the instructions on GCP. Once setup is done, use the following code to create an app with VertexAI as the provider:
```python
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
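The VertexAI config might look like this sketch (the model name is an illustrative assumption; use any chat model enabled in your GCP project):

```yaml
llm:
  provider: vertexai
  config:
    model: chat-bison   # assumed Vertex AI model name
    temperature: 0.5
    max_tokens: 1000
```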
If you can't find the specific LLM you need, no need to fret. We're continuously expanding our support for additional LLMs, and you can help us prioritize by opening an issue on our GitHub or simply reaching out to us on our Slack or Discord community.