Twelve Labs

The Twelve Labs Embed API provides powerful embeddings that represent videos, text, images, and audio in a unified vector space. This space enables any-to-any search across different types of content.

By natively processing all modalities, it captures interactions such as visual expressions, speech, and context, enabling precise and efficient advanced applications like emotion analysis, anomaly detection, and recommendation systems.

We'll look at how to use Twelve Labs embeddings with Qdrant via the Python and Node SDKs.

Installing the SDKs

$ pip install twelvelabs qdrant-client
$ npm install twelvelabs-js @qdrant/js-client-rest

Setting up the clients

from twelvelabs import TwelveLabs
from qdrant_client import QdrantClient

# Get your API keys from:
# https://playground.twelvelabs.io/dashboard/api-key
TL_API_KEY = "<YOUR_TWELVE_LABS_API_KEY>"

twelvelabs_client = TwelveLabs(api_key=TL_API_KEY)
qdrant_client = QdrantClient(url="http://localhost:6333/")

import { QdrantClient } from '@qdrant/js-client-rest';
import { TwelveLabs, EmbeddingsTask, SegmentEmbedding } from 'twelvelabs-js';

// Get your API keys from:
// https://playground.twelvelabs.io/dashboard/api-key
const TL_API_KEY = "<YOUR_TWELVE_LABS_API_KEY>"

const twelveLabsClient = new TwelveLabs({ apiKey: TL_API_KEY });
const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });

The following example uses the "Marengo-retrieval-2.7" model to embed a video. It generates vector embeddings of dimension 1024 and uses cosine similarity.

You can use the same model to embed audio, text, and images into the same vector space, enabling cross-modal search!
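
Since the collection below is configured with cosine distance, the score Qdrant returns for each hit is simply the cosine similarity between the query embedding and a stored segment embedding. As a quick, self-contained sanity check of that metric (the short vectors here are made up; real Marengo embeddings have 1024 components):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|) -- the score reported under cosine distance
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [0.1, 0.3, -0.2, 0.7]
v2 = [0.1, 0.3, -0.2, 0.7]
v3 = [-0.7, 0.2, 0.3, -0.1]

print(cosine_similarity(v1, v2))  # identical vectors -> 1.0
print(cosine_similarity(v1, v3))  # dissimilar vectors score lower
```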

Embedding a video

task = twelvelabs_client.embed.task.create(
    model_name="Marengo-retrieval-2.7",
    video_url="https://sample-videos.com/video321/mp4/720/big_buck_bunny_720p_2mb.mp4"
)

task.wait_for_done(sleep_interval=3)

task_result = twelvelabs_client.embed.task.retrieve(task.id)

const task = await twelveLabsClient.embed.task.create("Marengo-retrieval-2.7", {
    url: "https://sample-videos.com/video321/mp4/720/big_buck_bunny_720p_2mb.mp4"
})

await task.waitForDone(3)

const taskResult = await twelveLabsClient.embed.task.retrieve(task.id)
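
Both wait_for_done and waitForDone poll the task's status until it reaches a terminal state. If you need custom polling (progress logging, a hard timeout), the pattern looks roughly like this sketch; get_status is a hypothetical stand-in for retrieving the task's status from the API, and the "ready"/"failed" status names are an assumption:

```python
import time

def wait_until_ready(get_status, sleep_interval=3.0, timeout=600.0):
    """Poll get_status() until it returns a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("ready", "failed"):
            return status
        time.sleep(sleep_interval)
    raise TimeoutError("embedding task did not finish in time")

# Simulated task that becomes ready on the third poll.
polls = iter(["processing", "processing", "ready"])
print(wait_until_ready(lambda: next(polls), sleep_interval=0.01))  # -> ready
```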

Converting the model outputs into Qdrant points

from qdrant_client.models import PointStruct

points = [
    PointStruct(
        id=idx,
        vector=v.embeddings_float,
        payload={
            "start_offset_sec": v.start_offset_sec,
            "end_offset_sec": v.end_offset_sec,
            "embedding_scope": v.embedding_scope,
        },
    )
    for idx, v in enumerate(task_result.video_embedding.segments)
]

let points = taskResult.videoEmbedding.segments.map((data, i) => {
    return {
        id: i,
        vector: data.embeddingsFloat,
        payload: {
            startOffsetSec: data.startOffsetSec,
            endOffsetSec: data.endOffsetSec,
            embeddingScope: data.embeddingScope
        }
    }
})
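
To make the segment-to-point mapping concrete, here is the same transformation applied to mock data (SimpleNamespace stands in for the SDK's segment objects and plain dicts stand in for PointStruct; real embeddings_float lists have 1024 entries):

```python
from types import SimpleNamespace

def segment_to_point(idx, segment):
    """Mirror of the PointStruct mapping above, using a plain dict."""
    return {
        "id": idx,
        "vector": segment.embeddings_float,
        "payload": {
            "start_offset_sec": segment.start_offset_sec,
            "end_offset_sec": segment.end_offset_sec,
            "embedding_scope": segment.embedding_scope,
        },
    }

mock_segments = [
    SimpleNamespace(embeddings_float=[0.1, 0.2], start_offset_sec=0.0,
                    end_offset_sec=6.0, embedding_scope="clip"),
    SimpleNamespace(embeddings_float=[0.3, 0.4], start_offset_sec=6.0,
                    end_offset_sec=12.0, embedding_scope="clip"),
]
mock_points = [segment_to_point(i, s) for i, s in enumerate(mock_segments)]
print(mock_points[1]["payload"]["start_offset_sec"])  # -> 6.0
```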

Creating a collection to insert the vectors into

from qdrant_client.models import VectorParams, Distance

collection_name = "twelve_labs_collection"

qdrant_client.create_collection(
    collection_name,
    vectors_config=VectorParams(
        size=1024,
        distance=Distance.COSINE,
    ),
)
qdrant_client.upsert(collection_name, points)

const COLLECTION_NAME = "twelve_labs_collection"

await qdrantClient.createCollection(COLLECTION_NAME, {
    vectors: {
        size: 1024,
        distance: 'Cosine',
    }
});

await qdrantClient.upsert(COLLECTION_NAME, {
    wait: true,
    points
})
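
Long videos can produce many segments. If you'd rather not send them all in a single upsert request, a small chunking helper lets you upload in batches (the batch size of 64 here is an arbitrary choice, not an API limit):

```python
def batched(items, batch_size=64):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Sketch of batched upserts, assuming the clients and points defined above:
# for batch in batched(points):
#     qdrant_client.upsert(collection_name, batch)

print([len(b) for b in batched(list(range(10)), batch_size=4)])  # -> [4, 4, 2]
```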

Once the vectors are added, you can run semantic search across modalities. Let's try text.

text_segment = twelvelabs_client.embed.create(
    model_name="Marengo-retrieval-2.7",
    text="<YOUR_QUERY_TEXT>",
).text_embedding.segments[0]

qdrant_client.query_points(
    collection_name=collection_name,
    query=text_segment.embeddings_float,
)

const textSegment = (await twelveLabsClient.embed.create({
    modelName: "Marengo-retrieval-2.7",
    text: "<YOUR_QUERY_TEXT>"
})).textEmbedding.segments[0]

await qdrantClient.query(COLLECTION_NAME, {
    query: textSegment.embeddingsFloat,
});
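
Each hit carries the segment's time range in its payload, so results can be rendered as human-readable timestamps. A small sketch of that formatting, applied here to a mock payload (with real results you would iterate over the points in the query response and read each point's payload and score):

```python
def format_timestamp(seconds):
    """Render seconds as M:SS, e.g. 75 -> '1:15'."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

def describe_hit(payload, score):
    """Summarize one search hit using the payload fields stored above."""
    start = format_timestamp(payload["start_offset_sec"])
    end = format_timestamp(payload["end_offset_sec"])
    return f"{start}-{end} ({payload['embedding_scope']}, score={score:.3f})"

mock_payload = {"start_offset_sec": 66.0, "end_offset_sec": 72.0,
                "embedding_scope": "clip"}
print(describe_hit(mock_payload, 0.871))  # -> 1:06-1:12 (clip, score=0.871)
```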

Let's try audio

audio_segment = twelvelabs_client.embed.create(
    model_name="Marengo-retrieval-2.7",
    audio_url="https://codeskulptor-demos.commondatastorage.googleapis.com/descent/background%20music.mp3",
).audio_embedding.segments[0]

qdrant_client.query_points(
    collection_name=collection_name,
    query=audio_segment.embeddings_float,
)

const audioSegment = (await twelveLabsClient.embed.create({
    modelName: "Marengo-retrieval-2.7",
    audioUrl: "https://codeskulptor-demos.commondatastorage.googleapis.com/descent/background%20music.mp3"
})).audioEmbedding.segments[0]

await qdrantClient.query(COLLECTION_NAME, {
    query: audioSegment.embeddingsFloat,
});

Similarly, querying with an image

image_segment = twelvelabs_client.embed.create(
    model_name="Marengo-retrieval-2.7",
    image_url="https://gratisography.com/wp-content/uploads/2024/01/gratisography-cyber-kitty-1170x780.jpg",
).image_embedding.segments[0]

qdrant_client.query_points(
    collection_name=collection_name,
    query=image_segment.embeddings_float,
)

const imageSegment = (await twelveLabsClient.embed.create({
    modelName: "Marengo-retrieval-2.7",
    imageUrl: "https://gratisography.com/wp-content/uploads/2024/01/gratisography-cyber-kitty-1170x780.jpg"
})).imageEmbedding.segments[0]

await qdrantClient.query(COLLECTION_NAME, {
    query: imageSegment.embeddingsFloat,
});

