Characteristics of Singapore Government Website Construction and Lessons to Draw / Ghostwriting
admin 2025/5/1 2:22:05 【news】
I ran into some problems using the SIFT feature detector for real-time object matching. Here is my solution for video.
First, I created a struct to store matched keypoints. It holds the keypoint's location in templateImage, the matched keypoint's location in inputImage, and a similarity measure. Here I use the cross-correlation of the descriptor vectors as the similarity measure.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT lives in the nonfree module in OpenCV 2.x
#include <algorithm>
#include <vector>
using namespace cv;
using namespace std;

struct MatchedPair
{
    Point locationinTemplate;
    Point matchedLocinImage;
    float correlation;
    MatchedPair(Point loc)
    {
        locationinTemplate = loc;
        correlation = 0;
    }
};
I will sort the matched keypoints by similarity, so I need a helper function that tells std::sort() how to compare my MatchedPair objects.
bool comparator(const MatchedPair &a, const MatchedPair &b)
{
    return a.correlation > b.correlation;
}
Now the main code. I detect and describe features from the input frames and templateImage using the standard methods. After computing the features, I implement my own matching function. Here is the answer you are looking for:
int main()
{
    Mat templateImage = imread("template.png", IMREAD_GRAYSCALE); // read the template image
    VideoCapture cap("input.mpeg");
    Mat frame;
    vector<KeyPoint> InputKeypts, TemplateKeypts;
    SiftFeatureDetector detector;
    SiftDescriptorExtractor extractor;
    Mat InputDescriptor, templateDescriptor, result;
    vector<MatchedPair> mpts;
    Scalar s;
    cap >> frame; // grab one frame to learn the video's dimensions
    cvtColor(frame, frame, CV_BGR2GRAY);
    Mat outputImage = Mat::zeros(templateImage.rows + frame.rows, templateImage.cols + frame.cols, CV_8UC1);
    detector.detect(templateImage, TemplateKeypts); // detect the template's interest points
    extractor.compute(templateImage, TemplateKeypts, templateDescriptor);
    while (true)
    {
        mpts.clear(); // clear matches from the previous frame
        cap >> frame; // read the next video frame
        if (frame.empty())
            break;    // end of video
        outputImage = Mat::zeros(templateImage.rows + frame.rows, templateImage.cols + frame.cols, CV_8UC1); // create the output canvas
        cvtColor(frame, frame, CV_BGR2GRAY);
        detector.detect(frame, InputKeypts);
        extractor.compute(frame, InputKeypts, InputDescriptor); // detect and describe the frame's features
        /*
            So far we have computed descriptors for the template and the current
            frame using the standard methods. From here on we implement our own
            match method:
            - Descriptor matrices have 128 columns to hold a keypoint's features.
            - Each row of a descriptor matrix represents the 128 features of one
              keypoint.
            Match methods use these descriptor matrices to compute similarity.
            My approach is the cross-correlation of two keypoints' descriptor
            vectors; the code below shows how.
        */
        // Iterate over the rows of templateDescriptor (one row per keypoint
        // extracted from the template image): i indexes template keypoints,
        // j indexes input keypoints.
        for (int i = 0; i < templateDescriptor.rows; i++)
        {
            mpts.push_back(MatchedPair(TemplateKeypts[i].pt));
            mpts[i].correlation = 0;
            for (int j = 0; j < InputDescriptor.rows; j++)
            {
                // Use OpenCV's built-in matchTemplate to correlate row(i) of
                // templateDescriptor with row(j) of InputDescriptor.
                matchTemplate(templateDescriptor.row(i), InputDescriptor.row(j), result, CV_TM_CCORR_NORMED);
                s = sum(result); // the sum is the correlation of the two rows
                // Look for the most similar row in the input image, storing the
                // best match's correlation and its location in the input image.
                if (s.val[0] > mpts[i].correlation)
                {
                    mpts[i].correlation = s.val[0];
                    mpts[i].matchedLocinImage = InputKeypts[j].pt;
                }
            }
        }
        // Show the template, the input frame, and the matching lines in one output.
        templateImage.copyTo(outputImage(Rect(0, 0, templateImage.cols, templateImage.rows)));
        frame.copyTo(outputImage(Rect(templateImage.cols, templateImage.rows, frame.cols, frame.rows)));
        // The matching part: draw lines between the 4 best matches. Check the
        // correlation value again, because there can be 0-correlated pairs.
        std::sort(mpts.begin(), mpts.end(), comparator);
        for (int i = 0; i < 4 && i < (int)mpts.size(); i++)
        {
            if (mpts[i].correlation > 0.90)
            {
                // When drawing the line, account for the offset of the locations:
                // the template image sits at the upper left of the output image,
                // so the frame's coordinates are shifted by the template's size.
                cv::line(outputImage, mpts[i].locationinTemplate, mpts[i].matchedLocinImage + Point(templateImage.cols, templateImage.rows), Scalar::all(255));
            }
        }
        imshow("Output", outputImage);
        waitKey(33);
    }
}