
LangChain HumanMessage example (JSON)



 

Messages are the inputs and outputs of ChatModels. Rather than exposing a "text in, text out" API the way a plain LLM does, a chat model exposes an interface where chat messages are the inputs and outputs: a SystemMessage is a message for priming AI behavior, usually passed in as the first of a sequence of input messages; a HumanMessage carries the user's turn; and the model replies with an AIMessage. A message generally consists only of content, but it may also carry an additional_kwargs dict reserved for additional payload data associated with the message. For a message from an AI, for example, this could include tool calls, or a function_call when using OpenAI function calling. (The plain LLM class is not as complex as a chat model and is used best with simple input.)

A prompt for a language model is a set of instructions or input provided by a user to guide the model's response, helping it understand the context and generate relevant and coherent language-based output, such as answering questions, completing sentences, or engaging in a conversation. LangChain provides tooling to create and work with prompt templates, including few-shot prompt templates.

Say you want your LLM to respond in a specific format such as JSON. Keep in mind that large language models are leaky abstractions: you'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie's ability already falls short, and more powerful and capable models will perform better with complex schemas and/or multiple functions. Evaluating extraction and function-calling applications therefore often comes down to validating that the LLM's string output can be parsed correctly and comparing it to a reference object.

Important LangChain primitives like LLMs, parsers, prompts, retrievers, and agents implement the LangChain Runnable interface. Besides invoke, this interface provides two general approaches to streaming content: stream(), a default implementation that streams the final output from the chain, and streamEvents()/streamLog(), which expose intermediate steps. streamLog() output arrives as Log objects that include a list of jsonpatch ops describing how the state of the run has changed in each step, plus the final state; the jsonpatch ops can be applied in order to reconstruct the state. For agents there is a dedicated JSON chat agent (from langchain.agents import AgentExecutor, create_json_chat_agent): it uses JSON to format its outputs and is aimed at supporting Chat Models. Use it only with unstructured tools, i.e. tools that accept a single string input; a complete example appears at the end of this page. Internally, the ChatOpenAI class converts these message objects to and from the underlying provider API's format, and tracing how it processes its inputs and outputs is a good way to understand the message types.

To get started, you need to import the HumanMessage and SystemMessage objects from the langchain.schema module.
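Pieced together from the code fragments scattered above, basic usage looks like the following sketch (a reconstruction using the classic langchain.schema import path; in recent releases the same classes live in langchain_core.messages):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage, SystemMessage

llm = ChatOpenAI(temperature=0.9, model_name="gpt-3.5-turbo", max_tokens=2048)

messages = [
    SystemMessage(content="You are a helpful assistant that tells jokes"),
    HumanMessage(content="Tell a joke"),
]

output_answer = llm.invoke(messages)  # the reply is an AIMessage
print(output_answer.content)
```

Because the reply is itself a message, multi-turn history is just a growing list: append the returned AIMessage and the next HumanMessage, then invoke again.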
In short, LangChain is a framework for developing applications that are powered by language models, where a language model is a type of model that can generate text or complete text prompts. The framework is model-agnostic: the same message objects work against OpenAI or against an Ollama-run Llama 2 model served locally. LangServe helps developers deploy LangChain runnables and chains as a REST API; this library is integrated with FastAPI, uses pydantic for data validation, and in addition provides a client that can be used to call into runnables deployed on a server (a JavaScript client is available in LangChain.js). Some of the more popular templates to get started with: a Retrieval Augmented Generation chatbot over your data (defaults to OpenAI and PineconeVectorStore), Local Retrieval Augmented Generation, Extraction with OpenAI Functions (structured data from unstructured data via function calling), a Knowledge Base of "Stuff You Should Know" podcast episodes accessed through a tool, and an LLM Agent with Tools that extends the agent with access to multiple tools and tests that it uses them to answer questions. The examples here assume the usual packages (%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-openai chromadb bs4) and an OPENAI_API_KEY environment variable, set directly or loaded from a .env file.

One historical wrinkle: the prompting strategies LangChain first shipped all assumed that the output of the PromptTemplate was a string, whereas for chat models the input needs to be a list of messages. Since prompting is at the core of a lot of LangChain utilities and functionalities, this is a change that cuts pretty deep. For few-shot prompting, FewShotPromptTemplate takes an example_prompt (the PromptTemplate used to format each individual example) together with either examples or an example_selector to choose the examples to format into the prompt; either one or the other should be provided. JSON is also not the only structured format: some language models (like Anthropic's Claude) are particularly good at reasoning and writing XML, and a dedicated XML agent uses XML when prompting (see the AgentTypes documentation for more agent types).

A key feature of chatbots is their ability to use the content of previous conversation turns as context. This state management can take several forms, including simply stuffing previous messages into a chat model prompt, or the above but trimming old messages to reduce the amount of distracting information the model has to deal with. The RunnableWithMessageHistory class wraps another Runnable and manages the chat message history for it. Histories can also be persisted: note there is a CassandraChatMessageHistory integration, which may be easier to use for chat history storage, while CassandraKVStore is useful if you want a more general-purpose key-value store with prefixable keys. The simplest persistence of all, though, is plain JSON via json.dumps and json.loads.
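That round trip pairs json.dumps/json.loads with LangChain's message serialization helpers. Reassembled from the fragments on this page (the actual database write is elided here, as it was in the original):

```python
import json

from langchain.schema import (
    AIMessage,
    HumanMessage,
    messages_from_dict,
    messages_to_dict,
)

history = [
    HumanMessage(content="hi!"),
    AIMessage(content="Hello! How can I help you?"),
]

# Message objects -> JSON-serializable list of dicts
ingest_to_db = messages_to_dict(history)

# Round-trip through a JSON string (stand-in for your database of choice),
# then rebuild List[HumanMessage | AIMessage]
retrieve_from_db = json.loads(json.dumps(ingest_to_db))
retrieved_messages = messages_from_dict(retrieve_from_db)
```

Each serialized message is a plain dict with a type field ("human", "ai", ...) and a data payload, which is what lets a HumanMessage round-trip cleanly through JSON.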
As a language model integration framework, LangChain's use cases include document analysis and summarization, chatbots, and code analysis. At the message level, responses from a chat model are themselves messages. LangChain currently supports four message types: AIMessage, HumanMessage, SystemMessage, and ChatMessage, where ChatMessage takes an arbitrary role parameter. Most of the time you will only be dealing with HumanMessage, AIMessage, and SystemMessage: HumanMessage represents a message from the user, AIMessage a message from the model, and each class fixes its role; however, in cases where the chat model supports taking a chat message with an arbitrary role, ChatMessage is the escape hatch. When streaming, content arrives in chunk variants such as HumanMessageChunk, which derives from both HumanMessage and BaseMessageChunk.

Two practical notes. First, for tools, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used; the @tool decorator uses the function's docstring as the tool's description, so a docstring MUST be provided. Second, provider setup varies: ChatOpenAI reads OPENAI_API_KEY from the environment; ChatGoogleGenerativeAI requires either the GOOGLE_API_KEY environment variable set with your API key or the key passed via the google_api_key kwarg; and to use Azure AD, install the azure-identity package, get a token with DefaultAzureCredential's get_token, set OPENAI_API_TYPE to azure_ad, and set OPENAI_API_KEY to the token value. LangSmith is not needed, but it is helpful; after you sign up, export LANGCHAIN_TRACING_V2=true and LANGCHAIN_API_KEY=YOUR_KEY to start logging traces.

For JSON output there is an output parser that allows users to specify an arbitrary JSON schema and query LLMs for outputs that conform to that schema. When used in streaming mode, it yields partial JSON objects containing all the keys that have been returned so far; if diff is set to True, it instead yields JSONPatch operations describing the difference between the previous and the current object. Some LLMs support a tool or function-calling mode, or a JSON mode, and can structure output according to a given schema directly; generally, that approach is the easiest to work with and is expected to yield good results. To check results, the JSON evaluators (such as JsonValidityEvaluator) provide functionality to validate your model's output consistently. When parsing fails inside an agent, the failure surfaces as an OutputParserException raised while the agent plans its next step, at the parse call on the model's full output.
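A minimal sketch of that parser in a chain; the prompt wording and the setup/punchline keys are illustrative assumptions, not fixed by the API:

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user's query as a JSON object with keys setup and punchline."),
    ("human", "{query}"),
])
chain = prompt | ChatOpenAI(temperature=0) | JsonOutputParser()

print(chain.invoke({"query": "Tell me a joke"}))
# e.g. {'setup': '...', 'punchline': '...'}

# In streaming mode the parser yields progressively more complete partial objects
for chunk in chain.stream({"query": "Tell me a joke"}):
    print(chunk)
```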
Chat Models are a core component of LangChain. To create one, import a LangChain-supported chat model from the langchain.chat_models module and instantiate it; in the following examples we import the ChatOpenAI model, which uses the OpenAI LLM at the backend: chat = ChatOpenAI(temperature=0). Like the LLM classes, this has multiple settings that can be adjusted. That line assumes your OpenAI API key is set in your environment variables; if you would rather manually specify your API key and/or organization ID, pass openai_api_key="YOUR_API_KEY" and openai_organization to the constructor.

One of the core utility classes underpinning most (if not all) memory modules is the ChatMessageHistory class. This is a super lightweight wrapper that provides convenience methods for saving HumanMessages and AIMessages and then fetching them all; you may want to use this class directly if you are managing memory outside of a chain. Existing conversations can be imported as well: on macOS, iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4), and the IMessageChatLoader loads from this database file, converting iMessage conversations to LangChain chat messages. For JSON data there is a JSONLoader: import the loader module, then specify the path to the JSON file you want to load. Retrieval augmented generation works by taking a big source of data, for example a 50-page PDF, and breaking it down into chunks which are then embedded into a vector store. The text splitters in LangChain have two methods, create_documents and split_documents; both have the same logic under the hood, but one takes in a list of texts.

A few operational notes: if you are planning to use the async API, it is recommended to use AsyncCallbackHandler to avoid blocking the runloop. If you use a sync CallbackHandler while running an LLM, chain, tool, or agent through an async method, it will still work, but under the hood it will be called with run_in_executor. Humans can even serve as tools: since a human is, in effect, a general intelligence, an agent can call on a person for help when it is confused, and you can customize that tool's prompt_func and input_func according to your need. For everything else, the @tool decorator is the simplest way to define a custom tool.
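Reassembled from the fragments above, the @tool example reads as follows. The original docstring ("Multiply two integers together.") was a copy-paste slip that does not describe the function, and since @tool turns the docstring into the tool description the model sees, it is corrected here:

```python
from langchain_core.tools import tool

@tool
def count_emails(last_n_days: int) -> int:
    """Count the emails received in the last n days (dummy implementation)."""
    return last_n_days * 2
```

By default the decorator uses the function name as the tool name, though this can be overridden by passing a string as the first argument.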
LangChain itself does not provide a language model. Rather, it lets you leverage models like OpenAI's GPT, Anthropic, Hugging Face, and Azure OpenAI (OpenAI's GPT models have the most robust support at the moment); it has integrations with many model providers and exposes a standard interface to interact with all of these models. It also strives to create model-agnostic templates, so the same examples can run against OpenAI or Mistral. This document covers installation and environment setup first (keys can be set directly, loaded from a .env file, or entered at runtime via getpass), then the examples themselves.

Custom chat history backends follow the base class's implementation guidelines: implementations are expected to override all or some of add_messages (the sync variant for bulk addition of messages) and aadd_messages (the async variant). Tools with a single string input remain the simple case; the tools in a semantic layer use slightly more complex inputs and require digging a little deeper into explicit schemas.

Finally, models do not always return pure JSON. To make your application code more resilient towards non-JSON output, you can, for example, implement a regular expression to extract potential JSON strings from a response. A very naive approach simply extracts everything between the first { and the last }, as in the page's JavaScript helper naiveJSONFromText.
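The original helper is JavaScript (const naiveJSONFromText = (text) => { ...); here is the same idea sketched in Python to match the other examples, purely as an illustration:

```python
import json
import re


def naive_json_from_text(text: str):
    """Extract everything between the first '{' and the last '}' and parse it."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None


# Example: pull the JSON object out of a chatty model reply
reply = 'Sure! Here is your JSON: {"setup": "...", "punchline": "..."}'
print(naive_json_from_text(reply))
```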
Few shot prompting is a prompting technique which provides the Large Language Model with a list of examples and then asks it to generate some text following the lead of the examples provided. Chat models support the same idea through sample messages: including example HumanMessage/AIMessage pairs before the real question can help the model better understand the return information the user wants, including but not limited to the content, format, and response mode of the information. To get started, create the example set: a list of few-shot examples, where each example is a dictionary with the keys being the input variables and the values being the values for those input variables.

The JSON Chat Agent puts the message machinery to work. Its system prompt begins: "Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions..." To set it up, first initialize the tools and an OpenAI chat model capable of tool calling, e.g. tools = [TavilySearchResults(max_results=1)], then choose the LLM that will drive the agent. The agent must return as output one of: (1) a string, which can be treated as an AIMessage, or (2) a properly formatted JSON action. The older initialize_agent route (llm=math_llm, tools, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) behaves similarly; in that example the tool takes input directly from the command line. A complete, runnable agent sample closes this page.

If you want the model itself to guarantee JSON, newer OpenAI chat models accept a response_format parameter, which you can pass through the model_kwargs argument when initializing LangChain's ChatOpenAI class.
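A sketch of that pattern; the model name is an assumption (JSON mode requires a model release that supports it), and OpenAI additionally requires that the prompt itself mention JSON:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-3.5-turbo-1106",  # assumed: a release with JSON-mode support
    temperature=0,
    model_kwargs={"response_format": {"type": "json_object"}},
)

messages = [
    SystemMessage(content="Reply in JSON with keys setup and punchline."),
    HumanMessage(content="Tell a joke"),
]
print(llm.invoke(messages).content)  # the content is a parseable JSON string
```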
The quick start covers the basics of working with language models. In particular, it shows how to: use the most basic and common components of LangChain, namely prompt templates, models, and output parsers; use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining; and build a simple application with LangChain. It then covers how to use PromptTemplates to format the inputs to these models and how to use Output Parsers to work with the outputs. The same components extend in several directions: tool-calling responses can be run through a JsonOutputToolsParser to get the parsed JSON response, local models are available through ChatOllama, and on the storage side the JavaScript key-value integrations (const store = new VercelKVStore({ client }), together with a TextEncoder/TextDecoder pair for converting between strings and Uint8Arrays) can back chat history persistence much like the Cassandra KV integration mentioned earlier.

For multi-turn conversations, we'll use a prompt that includes a MessagesPlaceholder variable under the name "chat_history". This allows us to pass in a list of Messages to the prompt using the "chat_history" input key; these messages will be inserted after the system message ("You are a helpful AI bot. Your name is {name}.") and before the human message containing the latest question.
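A short sketch; the bot name and the sample turns are illustrative:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI bot. Your name is {name}."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{question}"),
])

messages = prompt.format_messages(
    name="Bob",
    chat_history=[
        HumanMessage(content="hi!"),
        AIMessage(content="Hello! How can I help you?"),
    ],
    question="What did I just say?",
)
```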
To have that history managed automatically, wrap the chain in RunnableWithMessageHistory. (The older route configures memory on an LLMChain instead, e.g. memory = ConversationBufferMemory(return_messages=True, output_key="answer", input_key="question"); the companion notebooks chat_with_csv_verbose.ipynb, which chats with CSV data and includes a verbose switch to show the LLM's thinking process, and chat_with_multiple_csv.ipynb, for LangChain 0.181 or above with multiple CSVs, apply such history-carrying chains to tabular data.) Specifically, RunnableWithMessageHistory can wrap any Runnable that takes as input one of: a dict with one key for the current input string/message(s) and a separate key for historical messages; a dict with a key that takes the latest message(s) as a string or sequence of BaseMessage, and a separate key for historical messages; or a dict with one key for all messages. If the input key points to a string, it will be treated as a HumanMessage in history.
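A minimal sketch wiring the placeholder prompt above to a per-session history; the in-memory store and the session id are assumptions for illustration:

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

chain = prompt | ChatOpenAI(temperature=0)  # `prompt` from the previous example

store = {}  # maps session_id -> ChatMessageHistory


def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]


chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="question",
    history_messages_key="chat_history",
)

chain_with_history.invoke(
    {"name": "Bob", "question": "Hi, I'm Alice!"},
    config={"configurable": {"session_id": "demo"}},
)
```

Here the latest "question" string is recorded as a HumanMessage and the model's reply as an AIMessage, so the next invoke with the same session_id sees the full history.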
A prompt template refers to a reproducible way to generate a prompt. Prompt templates are predefined recipes for generating prompts for language models: a template contains a text string ("the template") that can take in a set of parameters from the end user, and it may include instructions, few-shot examples, and specific context and questions appropriate for a given task. You can few shot prompt the LLM with a list of question/answer pairs; a classic example is the trick question whose desired answer is "The weight is the same, but the volume or density of the objects may differ."

Several base classes tie the earlier examples together. Every message derives from the base abstract Message class: param content is typed Union[str, List[Union[str, Dict]]], param additional_kwargs: dict is optional, and a classmethod lc_id() returns a unique identifier for the class for serialization purposes. The pydantic helpers generate a JSON representation of the model, with include and exclude arguments as per dict(); for the JSON variant, encoder is an optional function to supply as default to json.dumps(), with other arguments passed through as per json.dumps(). This is the machinery behind the message round trip shown earlier. A RunnableBinding can be thought of as a "runnable decorator" that preserves the essential features of a Runnable, i.e. batching, streaming, and async support, while adding additional functionality. BaseChatMessageHistory is the abstract base class for storing chat message history; the Cassandra KV example demonstrates how to set up chat history storage using the CassandraKVStore BaseStore integration. Agents, for their part, can return something with more structure than a single string, which can often be useful: a good example is an agent tasked with question-answering over some sources, where for returning the retrieved documents we just need to pass them through all the way. For local models, Ollama Functions is an experimental wrapper around open-source models run via Ollama (which bundles model weights, configuration, and data into a single package defined by a Modelfile, and optimizes setup and configuration details, including GPU usage) that gives them the same API as OpenAI Functions. And if the LLM is not understanding how to use a tool, you may need to change the tool's default name, description, or JSON schema.
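Below is the working code sample promised above: a sketch of the JSON chat agent assembled from the imports scattered through this page. The hub prompt name is an assumption (hwchase17/react-chat-json is the prompt commonly paired with this agent), and a Tavily API key must be set in the environment:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

# First, initialize Tavily and an OpenAI chat model capable of tool calling
tools = [TavilySearchResults(max_results=1)]
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Assumed prompt name: a JSON-chat agent prompt published on LangChain Hub
prompt = hub.pull("hwchase17/react-chat-json")

agent = create_json_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,  # guards against the OutputParserException noted earlier
)

print(agent_executor.invoke({"input": "what is LangChain?"}))
```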