[High bounty] How do I use EmguCV's SURF algorithm?

gqqnb 2013-01-17 08:19:44
The code at the end of this post is the EmguCV sample that uses the SURF algorithm. It looks like indices stores the match information, because

Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DToolbox.KeypointDrawType.DEFAULT);


draws the two images side by side, marks the feature points, and draws lines between the matching feature points. In the screenshot, the white circles are the detected feature points, and two pairs of them match.



My question: besides drawing the matched feature points with Features2DToolbox.DrawMatches, how can I find out in code how many matches there are, and which point matches which? Thanks!
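For what it's worth, in the sample below, after KnnMatch runs, indices is an observedCount x k matrix whose column 0 holds, for each observed keypoint, the index of its nearest model keypoint, and mask is an observedCount x 1 byte matrix where a non-zero row means the match survived filtering. An EmguCV Matrix&lt;T&gt; exposes its contents as a 2-D array through its Data property, so the surviving pairs can be listed with plain-array logic. A minimal sketch (the toy data in Main stands in for the real indices.Data / mask.Data):

```csharp
using System;
using System.Collections.Generic;

static class MatchLister
{
    // indicesData corresponds to indices.Data (int[observedCount, k]);
    // column 0 is the nearest model keypoint for each observed keypoint.
    // maskData corresponds to mask.Data (byte[observedCount, 1]);
    // a non-zero entry means the match survived the filtering votes.
    public static List<Tuple<int, int>> ListMatches(int[,] indicesData, byte[,] maskData)
    {
        var matches = new List<Tuple<int, int>>();
        for (int i = 0; i < maskData.GetLength(0); i++)
        {
            if (maskData[i, 0] != 0)
                matches.Add(Tuple.Create(i, indicesData[i, 0])); // observed i -> model index
        }
        return matches;
    }

    static void Main()
    {
        // Toy data standing in for indices.Data / mask.Data.
        int[,] indicesData = { { 3, 5 }, { 7, 2 }, { 1, 0 } };
        byte[,] maskData = { { 255 }, { 0 }, { 255 } };

        var matches = ListMatches(indicesData, maskData);
        Console.WriteLine("matched points: " + matches.Count);
        foreach (var m in matches)
            Console.WriteLine("observed keypoint " + m.Item1 + " -> model keypoint " + m.Item2);
    }
}
```

With the real matrices this would be ListMatches(indices.Data, mask.Data), called after the voting steps so the count agrees with nonZeroCount.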


/// <summary>
/// Draw the model image and observed image, the matched features and homography projection.
/// </summary>
/// <param name="modelImageFileName">The model image</param>
/// <param name="observedImageFileName">The observed image</param>
/// <param name="matchTime">The output total time for computing the homography matrix.</param>
/// <returns>The model image and observed image, the matched features and homography projection.</returns>
public static Image<Bgr, Byte> Draw(String modelImageFileName, String observedImageFileName, out long matchTime)
{
    Image<Gray, Byte> modelImage = new Image<Gray, byte>(modelImageFileName);
    Image<Gray, Byte> observedImage = new Image<Gray, byte>(observedImageFileName);
    Stopwatch watch;
    HomographyMatrix homography = null;

    SURFDetector surfCPU = new SURFDetector(500, false);
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    Matrix<int> indices;

    Matrix<byte> mask;
    int k = 2;
    double uniquenessThreshold = 0.8;

    // Extract features from the model image.
    modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, null);
    Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);

    watch = Stopwatch.StartNew();

    // Extract features from the observed image.
    observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
    if (observedKeyPoints.Size == 0)
        throw new Exception("No keypoints detected in the observed image.");

    Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);
    BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
    matcher.Add(modelDescriptors);

    // For each observed descriptor, find its k nearest model descriptors;
    // indices[i, 0] is the index of the best-matching model keypoint.
    indices = new Matrix<int>(observedDescriptors.Rows, k);
    using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
    {
        matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
        mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
    }

    int nonZeroCount = CvInvoke.cvCountNonZero(mask);
    if (nonZeroCount >= 4)
    {
        nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
        if (nonZeroCount >= 4)
            homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
    }

    watch.Stop();

    // Draw the matched keypoints.
    Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
        indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DToolbox.KeypointDrawType.DEFAULT);

    #region draw the projected region on the image
    if (homography != null)
    {
        // Draw a rectangle along the projected model.
        Rectangle rect = modelImage.ROI;
        PointF[] pts = new PointF[] {
            new PointF(rect.Left, rect.Bottom),
            new PointF(rect.Right, rect.Bottom),
            new PointF(rect.Right, rect.Top),
            new PointF(rect.Left, rect.Top)};
        homography.ProjectPoints(pts);

        result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);
    }
    #endregion

    matchTime = watch.ElapsedMilliseconds;

    return result;
}
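As I understand it (my reading of the sample, not stated in the thread), VoteForUniqueness applies Lowe's ratio test to the k=2 distances: a match is kept only when the nearest distance is at most uniquenessThreshold times the second-nearest, which rejects ambiguous descriptors. A standalone sketch of that filtering on plain arrays:

```csharp
using System;

static class RatioTest
{
    // distData corresponds to dist.Data (float[observedCount, 2]):
    // column 0 = distance to the nearest model descriptor,
    // column 1 = distance to the second nearest.
    // Returns a mask where 255 = unique enough to keep, 0 = rejected.
    public static byte[] Vote(float[,] distData, double threshold)
    {
        int n = distData.GetLength(0);
        var mask = new byte[n];
        for (int i = 0; i < n; i++)
            mask[i] = (byte)(distData[i, 0] <= threshold * distData[i, 1] ? 255 : 0);
        return mask;
    }

    static void Main()
    {
        float[,] dist = { { 0.2f, 0.9f },    // clearly unique -> keep
                          { 0.5f, 0.55f } }; // ambiguous -> reject at 0.8
        byte[] mask = Vote(dist, 0.8);
        Console.WriteLine(mask[0] + " " + mask[1]); // 255 0
    }
}
```

This is why lowering uniquenessThreshold below 0.8 gives fewer but more reliable matches.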
5 replies
@daviiid 2014-03-13
Console.WriteLine(" matched points : " + nonZeroCount);
jiaoshiyao 2013-10-15
Yep, no solution to this one. Over.
songgongpu002 2013-10-15
OP, did you ever solve this? I need to solve a similar problem too.
[Account deleted] 2013-01-18
This calls for summoning the moderator, 野比, and a few other experts to come study it together.
嘴哥臭鼬 2013-01-17
