Keyword List
| Term | Importance |
|---|---|
| Cross-modal Alignment | ⭐⭐⭐⭐⭐ |
| Modal Collapse | ⭐⭐⭐⭐⭐ |
| Vision Token Efficiency | ⭐⭐⭐⭐ |
| Audio Understanding Limitations | ⭐⭐⭐⭐ |
| Cross-modal Reasoning | ⭐⭐⭐⭐⭐ |
| Multimodal Hallucination | ⭐⭐⭐⭐⭐ |
| Feature Fusion | ⭐⭐⭐⭐ |
| Alignment Budget | ⭐⭐⭐⭐ |
| Modality Imbalance | ⭐⭐⭐⭐ |
| Joint Representation Learning | ⭐⭐⭐⭐ |
Multimodal Fusion Challenges: The Core Problems of Cross-modal Understanding
1. The Vision and Reality of Multimodal AI
1.1 The Ideal of Multimodal Fusion
The central goal of multimodal AI is to build intelligent systems that, like humans, can understand and process multiple kinds of perceptual information (vision, hearing, language, and so on) at the same time. This capability is essential for truly general artificial intelligence: human cognition rests on the cooperation of multiple senses, with vision supplying spatial information, hearing supplying temporal information, and touch supplying physical properties, and together these form our complete understanding of the world.
An ideal multimodal system should be able to:
- Seamlessly integrate information from different modalities
- Keep working when any single modality is missing
- Reason across modalities and transfer knowledge between them
- Maintain semantic consistency across modalities
1.2 Limitations of Current Technology
Current multimodal systems, however, face serious obstacles on the way to this vision. From GPT-4V to Gemini, multimodal large models have made remarkable progress, yet fundamental technical barriers remain:
Core problems:
- Imperfect semantic alignment between modalities
- Large differences in information density across modalities
- Limited cross-modal reasoning ability
- Modal collapse, where a modality is effectively ignored
- Multimodal hallucinations that are more complex than their unimodal counterparts
2. The Cross-modal Alignment Problem
2.1 The Essential Challenge of Alignment
**Cross-modal alignment** is the ability to map information from different perceptual modalities into a unified semantic space. For example, the concept "a dog running on grass" requires mapping the pixels of an image, a textual description, and possibly an audio clip onto the same conceptual representation.
Mathematical formalization:
Given inputs from different modalities, $x_v$ (vision), $x_t$ (text), and $x_a$ (audio), we need to learn mapping functions $f_v, f_t, f_a$ such that, for semantically equivalent inputs,

$$d\big(f_i(x_i),\, f_j(x_j)\big) < \epsilon, \quad i, j \in \{v, t, a\},$$

where $d$ is a distance metric and $\epsilon$ is a threshold.
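As a concrete illustration, here is a minimal sketch that checks this constraint on one image-text pair, assuming cosine distance as $d$ and hypothetical linear projections as $f_v$ and $f_t$:

import torch
import torch.nn.functional as F

f_v = torch.nn.Linear(768, 512)   # vision feature -> shared space
f_t = torch.nn.Linear(1024, 512)  # text feature -> shared space

x_v = torch.randn(1, 768)   # a vision embedding
x_t = torch.randn(1, 1024)  # the paired text embedding

# Cosine distance as d; alignment training should push this below
# the threshold epsilon for semantically matched pairs.
d = 1 - F.cosine_similarity(f_v(x_v), f_t(x_t)).item()
epsilon = 0.3  # illustrative threshold, not from the text
print(f"d = {d:.3f}, aligned: {d < epsilon}")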
2.2 Technical Approaches to Vision-Language Alignment
1. The contrastive learning paradigm (CLIP and its variants)
CLIP aligns images and text through contrastive learning:
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLIPLoss(nn.Module):
    """
    CLIP contrastive loss.
    """
    def __init__(self, temperature=0.07):
        super().__init__()
        self.temperature = temperature

    def forward(self, image_features, text_features):
        """
        image_features: [batch_size, dim]
        text_features: [batch_size, dim]
        """
        # L2 normalization
        image_features = F.normalize(image_features, p=2, dim=1)
        text_features = F.normalize(text_features, p=2, dim=1)
        # Similarity matrix
        logits = (image_features @ text_features.T) / self.temperature
        # Labels: the diagonal entries are the positive pairs
        batch_size = image_features.shape[0]
        labels = torch.arange(batch_size, device=image_features.device)
        # Symmetric loss
        image_loss = F.cross_entropy(logits, labels)
        text_loss = F.cross_entropy(logits.T, labels)
        return (image_loss + text_loss) / 2

class ImageTextAligner(nn.Module):
    """
    Image-text aligner.
    """
    def __init__(self, vision_encoder, text_encoder, projection_dim=512):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.text_encoder = text_encoder
        # Projection layers
        self.image_projection = nn.Linear(
            vision_encoder.output_dim, projection_dim
        )
        self.text_projection = nn.Linear(
            text_encoder.output_dim, projection_dim
        )
        self.loss_fn = CLIPLoss()

    def encode_image(self, image):
        vision_output = self.vision_encoder(image)
        return self.image_projection(vision_output)

    def encode_text(self, text):
        text_output = self.text_encoder(text)
        return self.text_projection(text_output)

    def forward(self, image, text):
        img_feat = self.encode_image(image)
        txt_feat = self.encode_text(text)
        return self.loss_fn(img_feat, txt_feat)
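A quick smoke test of CLIPLoss on random features (batch size and dimension here are arbitrary); for unaligned random features, the symmetric loss should sit near ln(batch_size) ≈ 2.08:

loss_fn = CLIPLoss(temperature=0.07)
img = torch.randn(8, 512)  # 8 image embeddings
txt = torch.randn(8, 512)  # 8 paired text embeddings
print(loss_fn(img, txt).item())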
2. Cross-attention mechanisms
Another alignment approach uses cross-attention, letting representations from different modalities interact directly:
class CrossModalAttention(nn.Module):
    """
    Cross-modal attention.
    """
    def __init__(self, vision_dim, text_dim, hidden_dim, num_heads=8):
        super().__init__()
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Separate attention modules for the two directions,
        # so the weights are not unintentionally shared
        self.text_attends_vision = nn.MultiheadAttention(
            hidden_dim, num_heads, batch_first=True
        )
        self.vision_attends_text = nn.MultiheadAttention(
            hidden_dim, num_heads, batch_first=True
        )
        # Modality-specific normalization
        self.vision_norm = nn.LayerNorm(hidden_dim)
        self.text_norm = nn.LayerNorm(hidden_dim)

    def forward(self, vision_features, text_features):
        """
        vision_features: [batch, num_patches, vision_dim]
        text_features: [batch, seq_len, text_dim]
        """
        # Project into a shared space
        vision_proj = self.vision_proj(vision_features)
        text_proj = self.text_proj(text_features)
        # Vision-to-text attention:
        # "While reading this text, which parts of the image does it attend to?"
        vision_to_text, _ = self.text_attends_vision(
            query=text_proj,
            key=vision_proj,
            value=vision_proj
        )
        # Text-to-vision attention
        text_to_vision, _ = self.vision_attends_text(
            query=vision_proj,
            key=text_proj,
            value=text_proj
        )
        # Residual connections
        text_output = self.text_norm(text_proj + vision_to_text)
        vision_output = self.vision_norm(vision_proj + text_to_vision)
        return vision_output, text_output
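A minimal usage sketch on random patch and token features (all dimensions here are illustrative):

attn = CrossModalAttention(vision_dim=768, text_dim=512, hidden_dim=256)
vision = torch.randn(2, 196, 768)  # 2 images, 196 patches each
text = torch.randn(2, 32, 512)     # 2 captions, 32 tokens each
v_out, t_out = attn(vision, text)
print(v_out.shape, t_out.shape)  # [2, 196, 256] and [2, 32, 256]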
2.3 Common Problems in Alignment
1. Alignment ambiguity
Many concepts are expressed differently in different modalities. For example:
- "Happiness" may appear as a smiling face in an image, but as a lively tone of voice in audio
- This many-to-one mapping makes alignment harder to learn
2. Granularity mismatch
The information granularity of an image may not match that of text:
- Images: fine-grained, pixel-level information
- Text: abstract, semantic-level information
class GranularityMismatchDetector:
    """
    Detect and resolve granularity mismatch.
    Note: get_keyword_embedding and aggregate_to_level are placeholder
    hooks; plug in a real embedding model and pooling strategy.
    """
    def __init__(self):
        self.hierarchy = {
            'scene': ['indoor', 'outdoor', 'urban'],
            'object': ['person', 'animal', 'vehicle', 'furniture'],
            'attribute': ['color', 'size', 'shape', 'texture'],
            'action': ['running', 'sitting', 'flying']
        }

    def align_granularity(self, image_features, text_tokens):
        """
        Align representations at different granularities.
        """
        aligned_features = []
        for img_feat in image_features:
            # Find the most relevant textual granularity level
            best_match = None
            best_score = -float('inf')
            for level, keywords in self.hierarchy.items():
                for keyword in keywords:
                    if keyword in text_tokens:
                        # Similarity between image feature and keyword embedding
                        score = self.compute_similarity(
                            img_feat, self.get_keyword_embedding(keyword)
                        )
                        if score > best_score:
                            best_score = score
                            best_match = level
            # Aggregate the image feature to the matched granularity level
            aligned_feat = self.aggregate_to_level(img_feat, best_match)
            aligned_features.append(aligned_feat)
        return aligned_features

    # --- placeholder hooks, to be replaced with real implementations ---
    def compute_similarity(self, img_feat, keyword_embedding):
        return F.cosine_similarity(img_feat, keyword_embedding, dim=-1).item()

    def get_keyword_embedding(self, keyword):
        raise NotImplementedError("requires a text embedding model")

    def aggregate_to_level(self, img_feat, level):
        return img_feat  # identity by default

3. The Modality Collapse Problem
3.1 Definition and Symptoms of Modal Collapse
**Modal collapse** refers to the phenomenon where a multimodal model, during training or inference, "forgets" or ignores certain modalities and focuses on only one of them.
Typical symptoms:
| Symptom | Description |
|---|---|
| Vision ignored | The model reads only the text and ignores the image |
| Text ignored | In image-to-text tasks, only generic descriptions are produced |
| Response degeneration | All modality inputs produce similar outputs |
| Capability loss | Processing ability for one modality drops sharply |
3.2 Causes of Modal Collapse
1. Biased optimization objectives
In joint training, some modalities may receive disproportionate attention simply because they are easier to optimize:
import numpy as np

class ModalityImbalanceAnalyzer:
    """
    Analyze modality imbalance.
    """
    def __init__(self, model):
        self.model = model
        self.gradients_history = {'vision': [], 'text': [], 'audio': []}

    def analyze_gradients(self, batch, modality='all'):
        """
        Analyze how gradients flow into each modality's parameters.
        """
        self.model.zero_grad()  # avoid accumulating stale gradients
        outputs = self.model(batch)
        loss = outputs['loss']
        loss.backward()
        modality_gradients = {}
        for name, param in self.model.named_parameters():
            if param.grad is None:
                continue
            if 'vision' in name:
                modality_gradients.setdefault('vision', []).append(
                    param.grad.norm().item()
                )
            elif 'text' in name:
                modality_gradients.setdefault('text', []).append(
                    param.grad.norm().item()
                )
            elif 'audio' in name:
                modality_gradients.setdefault('audio', []).append(
                    param.grad.norm().item()
                )
        # Per-modality gradient statistics
        stats = {}
        for mod, grads in modality_gradients.items():
            stats[mod] = {
                'mean': np.mean(grads),
                'std': np.std(grads),
                'max': np.max(grads)
            }
        # Detect imbalance
        max_grad = max(stats[m]['mean'] for m in stats)
        for mod in stats:
            balance_ratio = stats[mod]['mean'] / max_grad
            stats[mod]['balance_ratio'] = balance_ratio
        return stats

    def detect_collapse_risk(self, stats, threshold=0.1):
        """
        Detect the risk of modal collapse.
        """
        risks = {}
        for mod, stat in stats.items():
            if stat['balance_ratio'] < threshold:
                risks[mod] = {
                    'status': 'HIGH_RISK',
                    'ratio': stat['balance_ratio'],
                    'suggestion': f'Increase the training weight of the {mod} modality'
                }
            elif stat['balance_ratio'] < 0.3:
                risks[mod] = {
                    'status': 'MODERATE_RISK',
                    'ratio': stat['balance_ratio'],
                    'suggestion': f'Monitor the learning curve of the {mod} modality'
                }
            else:
                risks[mod] = {
                    'status': 'OK',
                    'ratio': stat['balance_ratio']
                }
        return risks
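A self-contained toy run; the analyzer keys on parameter names, so the toy model below deliberately names its heads 'vision_...' and 'text_...':

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision_head = nn.Linear(4, 2)
        self.text_head = nn.Linear(4, 2)

    def forward(self, batch):
        out = self.vision_head(batch['vision']) + self.text_head(batch['text'])
        return {'loss': out.pow(2).mean()}

model = ToyModel()
batch = {'vision': torch.randn(8, 4), 'text': torch.randn(8, 4)}
analyzer = ModalityImbalanceAnalyzer(model)
stats = analyzer.analyze_gradients(batch)
print(analyzer.detect_collapse_risk(stats))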
2. Representation-space collapse
When the representations of different modalities are not effectively separated, the representation space can collapse:
class RepresentationCollapseDetector:
    """
    Detect representation-space collapse.
    """
    def __init__(self):
        self.feature_history = []

    def compute_modality_diversity(self, features_dict):
        """
        Compute a modality-diversity metric.
        """
        # Per-modality feature statistics
        stats = {}
        for modality, features in features_dict.items():
            stats[modality] = {
                'mean': features.mean(dim=0),
                'std': features.std(dim=0),
                'covariance': torch.cov(features.T)
            }
        # Separability between modality pairs
        diversities = []
        modalities = list(stats.keys())
        for i in range(len(modalities)):
            for j in range(i + 1, len(modalities)):
                # Distance between modality centers
                center_dist = torch.norm(
                    stats[modalities[i]]['mean'] - stats[modalities[j]]['mean']
                ).item()
                # Average spread of the two modalities
                variance = ((stats[modalities[i]]['std'].mean() +
                             stats[modalities[j]]['std'].mean()) / 2).item()
                # Diversity metric: distance / spread
                diversity = center_dist / (variance + 1e-8)
                diversities.append(diversity)
        avg_diversity = np.mean(diversities)
        return {
            'avg_diversity': avg_diversity,
            'min_diversity': min(diversities),
            'max_diversity': max(diversities),
            'collapse_risk': avg_diversity < 1.5  # empirical threshold
        }
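A usage sketch on synthetic clusters; the two modalities are shifted apart, so the detector should report high diversity and no collapse risk:

detector = RepresentationCollapseDetector()
features = {
    'vision': torch.randn(64, 256) + 2.0,  # well-separated cluster
    'text': torch.randn(64, 256) - 2.0,
}
report = detector.compute_modality_diversity(features)
print(report['avg_diversity'], report['collapse_risk'])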
3.3 Mitigation Strategies for Modal Collapse
class ModalityBalancer:
    """
    Modality balancer: prevents modal collapse.
    """
    def __init__(self, model, modality_weights=None):
        self.model = model
        # Equal weights by default
        self.modality_weights = modality_weights or {
            'vision': 1.0,
            'text': 1.0,
            'audio': 1.0
        }

    def balanced_loss(self, outputs, targets, modality_contributions):
        """
        Compute the rebalanced loss.
        """
        base_loss = F.cross_entropy(outputs, targets)
        # Contribution of each modality
        contributions = modality_contributions
        # Dynamically adjust the weights
        adjusted_weights = {}
        for mod, weight in self.modality_weights.items():
            # If a modality contributes too much, reduce its weight
            if contributions[mod] > sum(contributions.values()) / len(contributions):
                adjusted_weights[mod] = weight * 0.8
            # If it contributes too little, increase its weight
            elif contributions[mod] < sum(contributions.values()) / (len(contributions) * 2):
                adjusted_weights[mod] = weight * 1.2
            else:
                adjusted_weights[mod] = weight
        # Weighted combination
        balanced = base_loss * sum(adjusted_weights.values()) / len(adjusted_weights)
        return balanced
class ModalityDropout:
    """
    Random modality dropout: improves robustness to missing modalities.
    """
    def __init__(self, dropout_prob=0.1):
        self.dropout_prob = dropout_prob

    @staticmethod
    def _mask(features, keep):
        # Broadcast the per-sample keep mask over all trailing dimensions
        shape = (features.shape[0],) + (1,) * (features.dim() - 1)
        return features * keep.view(shape).float()

    def forward(self, vision_features, text_features, audio_features=None):
        """
        Randomly drop entire modalities during training.
        """
        batch_size = vision_features.shape[0]
        device = vision_features.device
        # Randomly decide, per sample, which modalities to drop
        keep_vision = torch.rand(batch_size, device=device) > self.dropout_prob
        keep_text = torch.rand(batch_size, device=device) > self.dropout_prob
        # Apply the masks
        vision_masked = self._mask(vision_features, keep_vision)
        text_masked = self._mask(text_features, keep_text)
        if audio_features is not None:
            keep_audio = torch.rand(batch_size, device=device) > self.dropout_prob
            audio_masked = self._mask(audio_features, keep_audio)
        else:
            audio_masked = None
        return vision_masked, text_masked, audio_masked
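A usage sketch; samples whose feature sum is zero had that modality dropped entirely, forcing the model to rely on whatever survives:

dropout = ModalityDropout(dropout_prob=0.5)
v = torch.ones(4, 10, 8)  # [batch, patches, dim]
t = torch.ones(4, 6, 8)   # [batch, tokens, dim]
v_m, t_m, _ = dropout.forward(v, t)
print(v_m.sum(dim=(1, 2)), t_m.sum(dim=(1, 2)))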
4. The Vision Token Efficiency Problem
4.1 The Challenge of Tokenizing Visual Information
Converting an image into a token sequence is a key step in multimodal models, but it faces a trade-off between efficiency and information retention.
Comparison of existing schemes:
| Scheme | Token count | Information retention | Computational efficiency |
|---|---|---|---|
| Raw pixels | ~50K (224×224) | 100% | Very low |
| Fixed grid (patches) | 256-1024 | ~90% | High |
| Dynamic grid | Variable | ~95% | Medium |
| Hierarchical compression | 64-256 | ~85% | Very high |
| Semantic segmentation | Variable | ~80% | Medium |
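For intuition on the fixed-grid row: a ViT-style patch embedding turns a 224×224 image with 16×16 patches into (224/16)² = 196 tokens. A minimal sketch (the 768-dimensional token width is an assumption):

patchify = nn.Conv2d(3, 768, kernel_size=16, stride=16)  # one token per 16x16 patch
img = torch.randn(1, 3, 224, 224)
tokens = patchify(img).flatten(2).transpose(1, 2)
print(tokens.shape)  # torch.Size([1, 196, 768]): 14 x 14 = 196 vision tokens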
4.2 Vision Token Optimization Techniques
class AdaptiveVisualTokenization(nn.Module):
    """
    Adaptive visual tokenization:
    dynamically allocates the token count based on image content.
    Note: PatchEncoder and ImportanceScorer are assumed external modules;
    ImportanceScorer should return one score per patch, [batch, num_patches].
    """
    def __init__(self, base_patches=16, max_patches=1024, patch_size=16, embed_dim=768):
        super().__init__()
        self.base_patches = base_patches
        self.max_patches = max_patches
        self.patch_size = patch_size
        self.patch_encoder = PatchEncoder()
        self.importance_scorer = ImportanceScorer()
        self.merging_layer = TokenMergingLayer(embed_dim)

    def extract_patches(self, image, patch_size):
        # Unfold the image into non-overlapping, flattened patches
        patches = F.unfold(image, kernel_size=patch_size, stride=patch_size)
        return patches.transpose(1, 2)  # [batch, num_patches, patch_dim]

    def forward(self, image, target_tokens=None):
        """
        image: [batch, channels, height, width]
        """
        # Initial patching
        patches = self.extract_patches(image, patch_size=self.patch_size)
        # patches: [batch, num_patches, patch_dim]
        # Score the importance of each patch
        importances = self.importance_scorer(patches)  # [batch, num_patches]
        # Decide how many tokens to keep
        if target_tokens is None:
            # Adaptively choose the token count
            num_tokens = self.decide_token_count(importances)
        else:
            num_tokens = target_tokens
        # Importance-based selection
        selected_indices = self.importance_sampling(importances, num_tokens)
        batch_idx = torch.arange(patches.shape[0], device=patches.device).unsqueeze(1)
        selected_patches = patches[batch_idx, selected_indices]
        # Encode the selected patches
        encoded = self.patch_encoder(selected_patches)
        # Merge adjacent tokens if still over budget
        if encoded.shape[1] > num_tokens:
            encoded = self.merging_layer(encoded, target_tokens=num_tokens)
        return encoded, selected_indices, importances[batch_idx, selected_indices]

    def importance_sampling(self, importances, num_tokens):
        """Importance-based token selection (per-sample top-k)."""
        k = min(num_tokens, importances.shape[1])
        _, indices = torch.topk(importances, k=k, dim=1)
        return indices  # [batch, k]

    def decide_token_count(self, importances):
        """Decide the token count from image content."""
        # High-variance images need more tokens
        importance_variance = importances.var(dim=1)
        variance_bonus = (importance_variance > 0.1).float() * self.base_patches
        counts = (self.base_patches + variance_bonus).long().clamp(16, self.max_patches)
        # Use the batch maximum so the batch stays rectangular
        return int(counts.max().item())
class TokenMergingLayer(nn.Module):
    """
    Token merging layer: merges groups of consecutive tokens into single tokens.
    """
    def __init__(self, dim):
        super().__init__()
        self.merging_weights = nn.Linear(dim, dim)
        self.fusion = nn.Linear(dim * 2, dim)

    def forward(self, tokens, target_tokens):
        """
        tokens: [batch, current_tokens, dim]
        target_tokens: desired token count
        """
        if tokens.shape[1] <= target_tokens:
            return tokens
        batch_size, current_tokens, dim = tokens.shape
        merge_ratio = current_tokens // target_tokens
        # Drop any remainder so the sequence reshapes evenly
        tokens = tokens[:, :target_tokens * merge_ratio]
        # Group consecutive tokens
        merged = tokens.view(batch_size, target_tokens, merge_ratio, dim)
        # Aggregate each group
        merged_mean = merged.mean(dim=2)
        merged_max = merged.max(dim=2).values
        # Fuse the two pooled views
        fused = self.fusion(torch.cat([merged_mean, merged_max], dim=-1))
        return fused
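A usage sketch of the merging layer, compressing 196 tokens to 64 (a 3:1 merge; the remainder is dropped by the truncation step):

merger = TokenMergingLayer(dim=768)
tokens = torch.randn(2, 196, 768)
compressed = merger(tokens, target_tokens=64)
print(compressed.shape)  # torch.Size([2, 64, 768])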
5. Limitations of Audio Understanding
5.1 What Makes Audio Special
Audio signals differ fundamentally from vision and text:
Temporal nature: audio is a continuous time series, unlike discrete text or spatially organized images.
Multi-scale structure:
- Semantic level: words, sentences, paragraphs
- Acoustic level: phonemes, syllables, intonation
- Acoustic scene: background sounds, music, noise
from torchaudio.transforms import MelSpectrogram
from transformers import Wav2Vec2Model, HubertModel

class AudioFeatureExtractor:
    """
    Multi-scale audio feature extraction.
    """
    def __init__(self, sample_rate=16000):
        self.sample_rate = sample_rate
        # Short-time acoustic features
        self.mel_spectrogram = MelSpectrogram(
            sample_rate=sample_rate,
            n_fft=2048,
            hop_length=512,
            n_mels=80
        )
        # Wav2Vec features
        self.wav2vec = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-base')
        # HuBERT features
        self.hubert = HubertModel.from_pretrained('facebook/hubert-base-ls960')

    def extract_multi_scale_features(self, audio):
        """
        Extract audio features at multiple scales.
        audio: [batch, num_samples]
        """
        # 1. Acoustic-level features, transposed to [batch, time, freq]
        mel_spec = self.mel_spectrogram(audio).transpose(1, 2)
        # 2. Semantic-level features (Wav2Vec)
        wav2vec_features = self.wav2vec(audio).last_hidden_state
        # 3. Latent semantic representations (HuBERT)
        hubert_features = self.hubert(audio).last_hidden_state
        return {
            'mel_spectrogram': mel_spec,   # [batch, time, freq]
            'wav2vec': wav2vec_features,   # [batch, time, dim]
            'hubert': hubert_features      # [batch, time, dim]
        }

    def align_features(self, features, target_length):
        """
        Resample features at different scales to a unified length.
        """
        aligned = {}
        for name, feat in features.items():
            # Time-step ratio between source and target
            source_len = feat.shape[1]
            scale = source_len / target_length
            if scale > 1:
                # Downsample to exactly target_length
                feat = F.adaptive_avg_pool1d(
                    feat.transpose(1, 2), target_length
                ).transpose(1, 2)
            elif scale < 1:
                # Upsample
                feat = F.interpolate(
                    feat.transpose(1, 2),
                    size=target_length,
                    mode='linear',
                    align_corners=False
                ).transpose(1, 2)
            aligned[name] = feat
        return aligned
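align_features can be exercised on dummy features without downloading the pretrained models; the object is created with __new__ here purely to skip model loading, and the shapes are illustrative:

extractor = AudioFeatureExtractor.__new__(AudioFeatureExtractor)  # skip model loading
features = {
    'mel_spectrogram': torch.randn(1, 313, 80),  # roughly 10 s at hop 512
    'wav2vec': torch.randn(1, 499, 768),
}
aligned = extractor.align_features(features, target_length=100)
print({k: tuple(v.shape) for k, v in aligned.items()})
# {'mel_spectrogram': (1, 100, 80), 'wav2vec': (1, 100, 768)}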
5.2 Joint Audio-Visual Understanding
class AudioVisualModel(nn.Module):
    """
    Joint audio-visual understanding model.
    """
    def __init__(self, vision_dim, audio_dim, hidden_dim):
        super().__init__()
        # Audio encoder
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim)
        )
        # Vision encoder
        self.vision_encoder = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim)
        )
        # Cross-modal attention (reuses the CrossModalAttention module above)
        self.cross_attention = CrossModalAttention(
            hidden_dim, hidden_dim, hidden_dim
        )
        # Alignment prediction head
        self.alignment_predictor = nn.Linear(hidden_dim, 1)

    def detect_sound_source(self, audio, video_frames):
        """
        Locate the visual source of a sound.
        """
        # Encode
        audio_feat = self.audio_encoder(audio)          # [batch, time, dim]
        video_feat = self.vision_encoder(video_frames)  # [batch, frames, dim]
        # Cross-modal attention
        video_aligned, audio_aligned = self.cross_attention(
            video_feat, audio_feat
        )
        # Predict alignment scores
        alignment_scores = self.alignment_predictor(
            video_aligned * audio_aligned.mean(dim=1, keepdim=True)
        )
        return {
            'alignment_scores': alignment_scores,
            'audio_features': audio_aligned,
            'video_features': video_aligned
        }
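A shape-level usage sketch with random clips (dimensions are illustrative):

model = AudioVisualModel(vision_dim=512, audio_dim=128, hidden_dim=256)
audio = torch.randn(2, 50, 128)  # 2 clips, 50 audio frames
video = torch.randn(2, 16, 512)  # 2 clips, 16 video frames
out = model.detect_sound_source(audio, video)
print(out['alignment_scores'].shape)  # torch.Size([2, 16, 1]): one score per video frame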
6. Cross-modal Reasoning Ability
6.1 The Challenge of Cross-modal Reasoning
True cross-modal reasoning requires a model to:
- Understand the independent semantics of each modality
- Identify correspondences between modalities
- Reason jointly over multimodal information
- Handle conflicts and complementarity between modalities
class ModalEncoder(nn.Module):
    """Minimal per-modality encoder (a stand-in; the original leaves it undefined)."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim)
        )

    def forward(self, x):
        return self.net(x)

class CrossAttention(nn.Module):
    """Minimal single-head cross-attention (a stand-in; the original leaves it undefined)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, query, context):
        out, _ = self.attn(query, context, context)
        return out

class CrossModalReasoner(nn.Module):
    """
    Cross-modal reasoner.
    """
    def __init__(self, vision_dim, text_dim, audio_dim, hidden_dim):
        super().__init__()
        # Per-modality encoders
        self.vision_encoder = ModalEncoder(vision_dim, hidden_dim)
        self.text_encoder = ModalEncoder(text_dim, hidden_dim)
        self.audio_encoder = ModalEncoder(audio_dim, hidden_dim)
        # Modality interaction layers
        self.interaction_layers = nn.ModuleList([
            ModalityInteraction(hidden_dim)
            for _ in range(3)
        ])
        # Reasoning layers
        self.reasoning_layers = nn.ModuleList([
            ReasoningLayer(hidden_dim)
            for _ in range(2)
        ])
        # Output layer
        self.output_projection = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, vision, text, audio, task='qa'):
        """
        Cross-modal reasoning. Assumes the three inputs share a sequence length.
        """
        # 1. Encode each modality independently
        v_feat = self.vision_encoder(vision)
        t_feat = self.text_encoder(text)
        a_feat = self.audio_encoder(audio)
        # 2. Modality interaction (interaction layers stacked residually)
        fused = torch.zeros_like(v_feat)
        for layer in self.interaction_layers:
            fused = fused + layer(v_feat, t_feat, a_feat)
        # 3. Cross-modal reasoning
        reasoning_output = fused
        for layer in self.reasoning_layers:
            reasoning_output = layer(reasoning_output,
                                     [v_feat, t_feat, a_feat])
        # 4. Task-specific output
        output = self.output_projection(reasoning_output)
        return output
class ModalityInteraction(nn.Module):
    """
    Modality interaction layer.
    """
    def __init__(self, dim):
        super().__init__()
        self.v_t_attention = CrossAttention(dim)
        self.v_a_attention = CrossAttention(dim)
        self.t_a_attention = CrossAttention(dim)
        self.fusion = nn.Linear(dim * 3, dim)

    def forward(self, v, t, a):
        """
        Fuse information from the three modalities.
        Residuals are paired with the query modality so shapes line up.
        """
        # Pairwise interactions
        t_enh = t + self.v_t_attention(t, v)  # text attends to vision
        a_enh = a + self.t_a_attention(a, t)  # audio attends to text
        v_enh = v + self.v_a_attention(v, a)  # vision attends to audio
        # Aggregate
        fused = torch.cat([v_enh, t_enh, a_enh], dim=-1)
        fused = self.fusion(fused)
        return fused
class ReasoningLayer(nn.Module):
    """
    Reasoning layer: reasons over multimodal information.
    """
    def __init__(self, dim):
        super().__init__()
        self.query_projection = nn.Linear(dim, dim)
        self.key_projection = nn.Linear(dim, dim)
        self.value_projection = nn.Linear(dim, dim)
        self.reasoning_gate = nn.Linear(dim * 2, dim)

    def forward(self, context, modality_features):
        """
        context: the current reasoning state
        modality_features: list of per-modality feature tensors
        """
        # Queries come from the current state
        q = self.query_projection(context)
        # Keys and values aggregated across modalities
        keys = torch.stack([self.key_projection(m) for m in modality_features])
        values = torch.stack([self.value_projection(m) for m in modality_features])
        # Cross-modal attention (scaled dot-product)
        attended = []
        for k, v in zip(keys, values):
            attn = torch.softmax(q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5), dim=-1)
            attended.append(attn @ v)
        aggregated = torch.stack(attended).mean(dim=0)
        # Gated update
        gate = torch.sigmoid(self.reasoning_gate(
            torch.cat([context, aggregated], dim=-1)
        ))
        new_context = gate * aggregated + (1 - gate) * context
        return new_context
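A shape-level usage sketch; note that the reasoner as written assumes all three modalities share a sequence length:

reasoner = CrossModalReasoner(vision_dim=512, text_dim=300, audio_dim=128, hidden_dim=256)
vision = torch.randn(2, 10, 512)
text = torch.randn(2, 10, 300)
audio = torch.randn(2, 10, 128)
print(reasoner(vision, text, audio).shape)  # torch.Size([2, 10, 256])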
7. The Multimodal Hallucination Problem
7.1 What Makes Multimodal Hallucination Special
Multimodal hallucinations are more complex than text-only ones:
1. Cross-modal inconsistency
- The image shows a dog, but the text describes a cat
- Video content does not match the audio narration
2. Amplified visual hallucination
- When an image is blurry, the model may "fill in" details that are not there
- Selective attention can cause important information to be missed
3. Compositional hallucination
- Each modality is correct on its own, but the combination is wrong
class MultimodalHallucinationDetector:
    """
    Multimodal hallucination detector.
    Entities returned by the underlying models are assumed to expose
    .type, .embedding and .confidence attributes.
    """
    def __init__(self, vision_model, text_model):
        self.vision_model = vision_model
        self.text_model = text_model
        # self.fusion_model = FusionModel()  # assumed external module, unused here

    def detect_inconsistency(self, image, generated_text):
        """
        Detect inconsistencies between an image and generated text.
        """
        # Image understanding
        image_features = self.vision_model.extract_features(image)
        image_entities = self.vision_model.detect_entities(image)
        # Text understanding
        text_features = self.text_model.encode(generated_text)
        text_entities = self.text_model.extract_entities(generated_text)
        # Entity matching
        inconsistencies = []
        for img_ent in image_entities:
            matched = False
            for txt_ent in text_entities:
                if self.is_compatible(img_ent, txt_ent):
                    matched = True
                    break
            if not matched:
                inconsistencies.append({
                    'image_entity': img_ent,
                    'issue': 'not_mentioned',
                    'confidence': img_ent.confidence
                })
        # Entities mentioned in the text but absent from the image
        for txt_ent in text_entities:
            matched = False
            for img_ent in image_entities:
                if self.is_compatible(img_ent, txt_ent):
                    matched = True
                    break
            if not matched:
                inconsistencies.append({
                    'text_entity': txt_ent,
                    'issue': 'hallucinated',
                    'confidence': txt_ent.confidence
                })
        return inconsistencies

    def is_compatible(self, entity1, entity2):
        """
        Decide whether two entities are compatible.
        """
        # Type compatibility
        type_compatible = entity1.type == entity2.type
        # Semantic similarity
        similarity = F.cosine_similarity(
            entity1.embedding.unsqueeze(0),
            entity2.embedding.unsqueeze(0)
        ).item()
        return type_compatible and similarity > 0.7
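A toy compatibility check with mock entities; SimpleNamespace stands in for whatever entity type the vision and text models actually return, and the attribute names simply follow the detector's assumptions:

from types import SimpleNamespace

detector = MultimodalHallucinationDetector(vision_model=None, text_model=None)
e = torch.randn(16)
dog_img = SimpleNamespace(type='animal', embedding=e, confidence=0.9)
dog_txt = SimpleNamespace(type='animal', embedding=e + 0.01, confidence=0.8)
cat_txt = SimpleNamespace(type='object', embedding=torch.randn(16), confidence=0.7)
print(detector.is_compatible(dog_img, dog_txt))  # True: same type, near-identical embedding
print(detector.is_compatible(dog_img, cat_txt))  # False: type mismatch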
8. Related Topics
- Deep Dive into Hallucination - concrete manifestations of multimodal hallucination
- Hallucination Mitigation Strategies - cross-modal consistency constraints
- Context Window Limitations - managing multimodal information in context
- AI Agent System Complexity - design challenges for multimodal agents
- Benchmark Failure Problems - the difficulty of multimodal evaluation