  • github:iTomxy/data/nuswide

NUS-WIDE [1] is a multi-label dataset. Several papers use a partition similar to [1]: randomly pick 100 samples per class to form the query set. This seemed somewhat puzzling, so I asked the DCMH author; see [3].
The strategy used here: sample by class, guaranteeing the number of samples per class, and sample without replacement so there are no duplicates. (Perhaps that was what was meant all along?)
I also process the classes in ascending order of their sample counts, although in hindsight this seems unnecessary.
The data used is also the version provided by the DCMH author; see the repo where [3] lives, or see [4].

Code

splitting

For the semi-supervised setting: training set = the labeled part, and labeled + unlabeled = retrieval set.
"Test set" is used synonymously with "query set" below.
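To make these relations concrete, here is a minimal sanity-check sketch; the idx_*.npy file names match what the script below saves, but the load paths are hypothetical:

import numpy as np

labeled = np.load("idx_labeled.npy")      # training set = labeled part
unlabeled = np.load("idx_unlabeled.npy")
query = np.load("idx_test.npy")           # test set = query set

retrieval = np.concatenate([labeled, unlabeled])  # retrieval = labeled + unlabeled
assert len(set(query) & set(retrieval)) == 0      # the query set is disjoint from retrieval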

import numpy as np
import scipy.io as sio
import os
from os.path import join
import time

np.random.seed(int(time.time()))

# load the label data
NUSWIDE = "/usr/local/dataset/nuswide-tc21"
labels = sio.loadmat(join(NUSWIDE, "nus-wide-tc21-lall.mat"))["LAll"]
print(labels.shape)  # (195834, 21)

N_CLASS = labels.shape[1]
N_SAMPLE = labels.shape[0]
TEST_PER = 100  # 100 samples per class in the test set
TRAIN_PER = 500  # 500 samples per class in the training set
N_TEST = TEST_PER * N_CLASS
N_TRAIN = TRAIN_PER * N_CLASS

"""1. First guarantee at least 100 samples per class in the test set"""
indices = list(range(N_SAMPLE))  # all indices
np.random.shuffle(indices)

cls_sum = np.sum(labels[indices], axis=0)  # per-class sample counts
#print(cls_sum)
classes = np.argsort(cls_sum)  # ascending: rarest class first
#print(classes)

id_test = []
cnt = np.zeros_like(labels[0], dtype=np.int32)  # the default int8 would overflow
for cls in classes:
    print("--- {} ---".format(cls))
    for i in indices:
        if cnt[cls] >= TEST_PER:  # enough samples of this class
            break
        if labels[i][cls] == 1:
            id_test.append(i)
            cnt += labels[i]
    #print(cnt)
    assert cnt[cls] >= TEST_PER  # in principle one pass is always enough
    indices = list(set(indices) - set(id_test))  # drop the ids already taken
    np.random.shuffle(indices)
    #print("left:", len(indices))
assert len(set(id_test)) == len(id_test)  # verify there are no duplicates
#print("cnt:", cnt)
print("#test:", len(id_test))

"""2. Similarly, guarantee at least 500 samples per class in the training set"""
indices = list(set(indices) - set(id_test))  # drop the test ids just selected
np.random.shuffle(indices)
print(len(indices))

cls_sum = np.sum(labels[indices], axis=0)
#print(cls_sum)
classes = np.argsort(cls_sum)
#print(classes)

id_train = []
cnt = np.zeros_like(labels[0], dtype=np.int32)
for cls in classes:
    print("--- {} ---".format(cls))
    for i in indices:
        if cnt[cls] >= TRAIN_PER:
            break
        if labels[i][cls] == 1:
            id_train.append(i)
            cnt += labels[i]
    #print(cnt)
    assert cnt[cls] >= TRAIN_PER
    indices = list(set(indices) - set(id_train))
    np.random.shuffle(indices)
    #print("left:", len(indices))
assert len(set(id_train)) == len(id_train)
#print("cnt:", cnt)
print("#train:", len(id_train))

"""3. Top up the remaining part of the test and training sets"""
indices = list(set(indices) - set(id_train))  # also drop the train ids just selected
np.random.shuffle(indices)
#print(len(indices))

lack_test = N_TEST - len(id_test)
lack_train = N_TRAIN - len(id_train)
print("lack:", lack_test, ",", lack_train)

id_test.extend(indices[:lack_test])
id_train.extend(indices[lack_test: lack_test + lack_train])

print("#total test:", len(id_test))
print("#total train:", len(id_train))

"""4. the unlabeled part"""
# unlabeled = all - labeled (training) - query (test)
id_unlabeled = list(set(indices) - set(id_train) - set(id_test))
print("#unlabeled:", len(id_unlabeled))

"""5. retrieval set"""
id_ret = id_train + id_unlabeled
print("#retrieval:", len(id_ret))

"""save"""
_info = "nuswide-tc21.{}pc.{}pc".format(TEST_PER, TRAIN_PER)
SAV_P = join(NUSWIDE, _info)
if not os.path.exists(SAV_P):
    os.makedirs(SAV_P)

test_id = np.asarray(id_test)
labeled_id = np.asarray(id_train)
unlabeled_id = np.asarray(id_unlabeled)
ret_id = np.asarray(id_ret)

np.save(join(SAV_P, "idx_test.npy"), test_id)
np.save(join(SAV_P, "idx_labeled.npy"), labeled_id)
np.save(join(SAV_P, "idx_unlabeled.npy"), unlabeled_id)
np.save(join(SAV_P, "idx_ret.npy"), ret_id)

image mean

  • Compute two kinds of image mean: the per-pixel mean and the per-channel mean.
  • The images come from nus-wide-tc21-iall.mat; as a preprocessing step they were split by id into one .npy file per image, stored in images.npy/ (see the sketch below for how this step might look).
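The per-image split itself is not shown here, so below is only a minimal sketch of how it might be done. Two loud assumptions: the key name "IAll" is guessed by analogy with "LAll", and if the .mat file is in the v7.3 format, h5py would be needed instead of scipy.io:

import os
from os.path import join
import numpy as np
import scipy.io as sio

NUSWIDE = "/usr/local/dataset/nuswide-tc21"
IMAGE_P = join(NUSWIDE, "images.npy")
os.makedirs(IMAGE_P, exist_ok=True)

# assumption: images are stored under an "IAll" key, shaped (N, H, W, C);
# note this loads the whole array into memory at once
images = sio.loadmat(join(NUSWIDE, "nus-wide-tc21-iall.mat"))["IAll"]
for idx in range(images.shape[0]):
    # 0-based file names, matching the range(N_SAMPLE) indices saved above
    np.save(join(IMAGE_P, "{}.npy".format(idx)), images[idx])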
"""计算图像均值:按 pixel、按 channel 两种"""IMAGE_P = join(NUSWIDE, "images.npy")
_img = np.load(join(IMAGE_P, "1.npy"))
mean_pix = np.zeros_like(_img).astype(np.float32)  # [224, 224, 0]
mean_channel = np.zeros([3]).astype(np.float32)for i, idx in enumerate(ret_id):img = np.load(join(IMAGE_P, "{}.npy".format(idx)))mean_pix += imgmean_channel += np.mean(img, (0, 1))if i % 1000 == 0 or i == ret_id.shape[0] - 1:print(i)mean_pix /= ret_id.shape[0]
mean_channel /= ret_id.shape[0]
print("mean channel:", mean_channel)  # [111.84164 107.72994  99.7127 ]np.save(join(SAV_P, "avgpix.{}.npy".format(_info)), mean_pix)
np.save(join(SAV_P, "avgc.{}.npy".format(_info)), mean_channel)

References

  1. NUS-WIDE
  2. Simultaneous Feature Learning and Hash Coding with Deep Neural Networks
  3. details of partition of NUS-WIDE #8
  4. NUS-WIDE dataset preprocessing