Microsoft AutoGen
AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks.
Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This makes it possible to build applications that are more complex and sophisticated than a single large language model (LLM) could support.
Customization: AutoGen agents can be customized to fit the specific needs of an application. This includes the ability to choose which LLMs to use, what types of human input to allow, and which tools to employ.
Human participation: AutoGen allows humans to participate in the loop, meaning humans can provide input and feedback to the agents as needed.
With the AutoGen-Qdrant integration, you can build AutoGen workflows backed by Qdrant's high-performance retrieval.
Installation
pip install "autogen-agentchat[retrievechat-qdrant]"
Usage
Configuration
import autogen
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
The config_list_from_json function first looks for the environment variable OAI_CONFIG_LIST, which must be a valid JSON string. If that is not found, it looks for a JSON file named OAI_CONFIG_LIST. An example file can be found here.
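To illustrate the expected shape, the OAI_CONFIG_LIST content is a JSON array of per-model configurations. The model names and key below are placeholders for illustration only, not values from this guide:

```python
import json

# Hypothetical OAI_CONFIG_LIST contents; "model" and "api_key" values are placeholders.
example_config = [
    {"model": "gpt-4o", "api_key": "sk-..."},
    {"model": "gpt-4o-mini", "api_key": "sk-..."},
]

# Serialized, this is what the OAI_CONFIG_LIST environment variable (or file) would hold.
serialized = json.dumps(example_config)
parsed = json.loads(serialized)
print(parsed[0]["model"])
```

Each entry in the list describes one model endpoint; config_list then indexes into these entries, as in `config_list[0]["model"]` later in this guide.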
Constructing Agents for RetrieveChat
We start by initializing the AssistantAgent and RetrieveUserProxyAgent. The system message of the AssistantAgent needs to be set to "You are a helpful assistant."; the detailed instructions are given in the user message.
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer
from autogen import AssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
# 1. Create an AssistantAgent instance named "assistant"
assistant = AssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config={
        "timeout": 600,
        "cache_seed": 42,
        "config_list": config_list,
    },
)
sentence_transformer_ef = SentenceTransformer("all-distilroberta-v1").encode
client = QdrantClient(url="http://localhost:6333")  # Assumes a Qdrant instance running locally on the default port
# 2. Create the RetrieveUserProxyAgent instance named "ragproxyagent"
# Refer to https://msdocs.cn/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent
# for more information on the RetrieveUserProxyAgent
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    retrieve_config={
        "task": "code",
        "docs_path": [
            "path/to/some/doc.md",
            "path/to/some/other/doc.md",
        ],
        "chunk_token_size": 2000,
        "model": config_list[0]["model"],
        "vector_db": "qdrant",
        "db_config": {"client": client},
        "get_or_create": True,
        "overwrite": True,
        "embedding_function": sentence_transformer_ef,  # Defaults to "BAAI/bge-small-en-v1.5" via FastEmbed
    },
    code_execution_config=False,
)
Running the Agents
# Always reset the assistant before starting a new conversation.
assistant.reset()
# We use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message.
# The assistant receives it and generates a response. The response will be sent back to the ragproxyagent for processing.
# The conversation continues until the termination condition is met.
qa_problem = "What is the .....?"
chat_results = ragproxyagent.initiate_chat(assistant, message=ragproxyagent.message_generator, problem=qa_problem)