As I’ve mentioned before, my articles about using OpenCV and C# are the most viewed articles on this site. Among those articles, I get more emails asking about using OpenCV and C# for augmented reality applications than about anything else. It appears that AR is a pretty big topic these days, and everyone looking to get into the field needs a good place to start. So, I’ve put together a small application that uses OpenCV and C# to do augmented reality. As always, I’m using the OpenCvSharp .NET wrapper for OpenCV, but the same principles apply in pretty much any other wrapper or in OpenCV itself. Usually, I’ll walk through every line of code in my example applications and explain what I’m doing. This time, though, I’ve decided to just provide you with the code and let you figure it out for yourself. However, as always, I’m more than willing to answer any questions you may have as you go along with the example.

To get started, you will need to download my OpenCV Augmented Reality example application. It already has everything you need to begin your augmented reality application, including the OpenCvSharp and OpenCV runtimes, which are located in the bin > Debug directory. In that same directory, you will see a file called “chessboard 6×5.jpg”. For this example to work, you will need to print a copy of that image onto a typical 8.5×11 piece of paper (scale doesn’t really matter here). Once you’ve printed the chessboard image, go ahead and launch the app by opening the .sln file in Visual C# or by running the AugmentedReality.exe file, also found in the bin > Debug folder.

When you run the application, you will see a Windows form that only includes a button with the word “Start” on it and a combo box next to it. The combo box includes options for 1, 2, & 3. Picking number 1 means that the application will look for the chessboard image in a video feed and will overlay a typical image over it. The image used in this example is a standard JPG file of the OpenCV logo. If you choose number 2 from the combo box, the application will look for the chessboard in a video feed and will overlay a video over it. For this example, I’ve included a video file called “trailer.avi”, which is a trailer for the “Big Buck Bunny” movie created by the Peach Open Movie Project. If you choose number 3 from the combo box, the application will look for the chessboard image in a video feed and will draw a box around the portion of the image that is used to overlay the image or video. Here is an example of the application using number 2 to display the “Big Buck Bunny” trailer video over the chessboard.
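All three modes share the same core idea: find the chessboard’s inner corners in each camera frame, compute a perspective transform from the overlay’s corners to the board’s outermost detected corners, and warp the overlay into place. Here is a minimal sketch of that idea using the classic C-style OpenCvSharp API from the download; treat the exact method signatures as assumptions, since they vary a bit between OpenCvSharp versions, and note this is a simplified illustration rather than the example app’s actual code.

```csharp
using OpenCvSharp;

class OverlaySketch
{
    static void Main()
    {
        CvCapture cap = Cv.CreateCameraCapture(0);
        IplImage logo = Cv.LoadImage("opencv-logo.jpg");  // image to overlay (name assumed)
        CvSize boardSize = new CvSize(6, 5);              // inner corners of "chessboard 6x5.jpg"

        using (CvWindow w = new CvWindow("AR"))
        {
            while (CvWindow.WaitKey(10) < 0)
            {
                IplImage frame = cap.QueryFrame();
                CvPoint2D32f[] corners;
                int cornerCount;
                bool found = Cv.FindChessboardCorners(frame, boardSize,
                                                      out corners, out cornerCount);
                if (found && cornerCount == boardSize.Width * boardSize.Height)
                {
                    // Map the overlay's four corners onto the four outermost
                    // detected chessboard corners (TL, TR, BR, BL order).
                    CvPoint2D32f[] src =
                    {
                        new CvPoint2D32f(0, 0),
                        new CvPoint2D32f(logo.Width, 0),
                        new CvPoint2D32f(logo.Width, logo.Height),
                        new CvPoint2D32f(0, logo.Height)
                    };
                    CvPoint2D32f[] dst =
                    {
                        corners[0],
                        corners[boardSize.Width - 1],
                        corners[cornerCount - 1],
                        corners[cornerCount - boardSize.Width]
                    };
                    CvMat map = Cv.GetPerspectiveTransform(src, dst);
                    // In the real app you'd warp into a blank image and blend it
                    // onto the frame with a mask; warping straight into the frame
                    // is shown here only to keep the sketch short.
                    Cv.WarpPerspective(logo, frame, map);
                }
                w.Image = frame;
            }
        }
    }
}
```

For mode 2, you would simply grab the next frame of “trailer.avi” each pass through the loop and warp that instead of the static logo image.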

Augmented Reality Example Using OpenCV and C#

I’m using a cheapo USB webcam to capture the video, and even running on a low-end laptop, I’m still getting pretty good performance out of the application. The possibilities of technology like this are endless. For example, since OpenCV can be run on an iPhone, one could easily write an augmented reality iPhone app that overlays advertisements on top of images when the iPhone is pointed at things like store fronts or billboards. Imagine pointing your phone at a still image in a magazine and having a video commercial play on your screen. You could do that with this kind of technology. Anyways, whatever you decide to use this for, be sure to come back here and share your story with the rest of us in the comments below. I’m extremely curious as to what kinds of cool stuff you guys can come up with. Until next time, HAPPY CODING!!!


53 Responses to Augmented Reality Using C# and OpenCV

  1. zahiritpro says:

    hello Lucus..I found this tutorial very useful..

    Is it possible for the detection to start only when the object is still for few seconds(i.e object in same position in consecutive frames say 10 frames)…?

    • LuCuS says:

      Yes. You could put a timer that starts counting down when the object is first detected. When the timer runs out, you could begin playing the animation or video.

      • zahiritpro says:

        But how can i find the repeated frames..I want to eliminate unnecessary frames and detect only those which are repeated for few seconds..

        • LuCuS says:

          I’m not sure I understand your question? Are you meaning the “repeated frames” in your camera feed or in the video overlay?

          • zahiritpro says:

            sorry for being unclear….My idea is to detect the sign made using hands..

            I’m capturing frames containing human hand using IplImage* frame = cvQueryFrame( capture );

            I want to detect only those frames in which the hand is showing the same sign..

            Hope i’m better now…

          • LuCuS says:

            Ah. Ok. I understand now. Sign language recognition has been discussed several times on almost every article in that list. On each of those articles, scroll down to the comments and you’ll see those discussions. I also have a demo app that shows how to use OpenCvSharp to detect hands using the blob method. For a while now, I’ve been planning on writing an article showing how to do sign-language / hand recognition with OpenCV, but I’ve been backed up with several other projects and haven’t had the time. Try looking through some of those other pages, reading through the comments, and see if that doesn’t get you to where you want to be. I know that’s a lot of stuff to dig through, but you’ll find a lot of good information there. One of my readers, UtopiaDreamer, has been working on this very thing for quite a while now. So, be sure to pay special attention to those comments. Also, make sure you check out my Template Matching article. That might get you moving in the right direction.

            For a quick explanation to answer your last question, you would need to store a counter that gets incremented each time a particular hand-gesture is detected. Once that counter reaches a certain number, you could do whatever work comes next. If you use something like I show in my Template Matching article in the link above, you could move your hand around as much as you want and the work wouldn’t begin until the hand is displayed in the same pose as in your template image. Word of warning though, doing hand detection is a fairly complex thing to do with OpenCV due to different hand sizes, rotations, lighting, etc… To help with a lot of those issues, I would recommend using some filters such as the Canny filter and ROI (region of interest) to help eliminate unwanted pieces of your video feed and to help with performance. Unlike the Microsoft Kinect, OpenCV relies on only 1 camera whereas the Kinect uses 2 along with some IR (infrared) trickery which allows it to detect depths and help determine the ROI a lot more accurately. But, with that said, it’s still possible to do using 1 camera and OpenCV.
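            To make the counter idea above concrete, the “stable for N frames” logic could be sketched like this in plain C#. DetectGesture is a hypothetical placeholder for whatever detection you end up using (template matching, blob detection, etc.); only the counter logic is the point here.

            ```csharp
            // Hypothetical sketch: DetectGesture() stands in for your own detection code
            // and returns a gesture label, or null when nothing is recognized.
            int stableFrames = 0;
            const int RequiredFrames = 10;   // ~10 consecutive frames of the same gesture
            string lastGesture = null;

            while (true)
            {
                string gesture = DetectGesture();
                if (gesture != null && gesture == lastGesture)
                    stableFrames++;          // same gesture as last frame; keep counting
                else
                    stableFrames = 0;        // gesture changed or disappeared; start over
                lastGesture = gesture;

                if (stableFrames >= RequiredFrames)
                {
                    // Gesture held steady long enough; do whatever work comes next.
                    stableFrames = 0;
                }
            }
            ```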

            Let me know if any of this does / does not make sense and we’ll take it from there.

  2. zahiritpro says:

    I’m highly satisfied by your explanation…Thanks for the time….I will follow what you said and report to u …

    In the meantime, when i work with blobs,i’m getting the following error

    “1>testing.obj : error LNK2019: unresolved external symbol _cvFilterByArea referenced in function _main
    1>testing.obj : error LNK2019: unresolved external symbol _cvRenderBlobs referenced in function _main
    1>testing.obj : error LNK2019: unresolved external symbol _cvLabel referenced in function _main”
    But i have referenced the blob library in both project folder and also in the project in vc++…

    What could be the reason?

  3. Almalevi says:

    Hi LuCus congrats for the explanation, exist one way to load my own 3d model made in blender?? instead of the video?? sorry for my english…

  4. kungfu4000 says:

    do you have a tutorial on how to use the haarcascade.xml file i made?…. i notice you use the chessboardflag from opencvsharp, but i want to use my own template for the project, or is there a way i can add something like chessboardflag to opencvsharp or modify opencvsharp?

  5. christiansinho777 says:

    hello i have tried your application and it works very good , now i am trying to make my own application but i can’t include this : using OpenCvSharp; …… first i tried to copy all the .dll files you included in your application in the folder bin>debug in my own folder but it stills doesn’t work so i would like to know if i have to download it or how to instal it or how to make it work ….this is very important for my …

    • LuCuS says:

      Have you added references to OpenCV in your project? Right click on References in your Solution Explorer and select Add Reference. Click the Browse tab and locate the OpenCvSharp dlls.

  6. christiansinho777 says:

    i am new at this and i want to create a 3d image so, i would aprreciate you can help me with this.
    thanks a lot!

    • LuCuS says:

      I’m in the process of writing an article and example app that shows how to load 3D objects. I’ve been extremely busy with some other things lately, but hope to have time soon to complete it.

  7. masangga says:

    Hi Lucus,

    How if I want to embed/draw 3D object with opencvsharp??

    and any AR library that could work collaborately with opencvsharp?


    • LuCuS says:

      I’ve been working on an article and example project that shows how to load 3D objects using SharpGL. Unfortunately, I’ve been extremely busy and it looks like I’m going to get even busier in the upcoming weeks. So, I’m not sure when I’ll get around to finishing it. But, to answer your question, take a look into SharpGL or DirectX. A while back, I used DirectX to load objects such as VRML into an AR app I created, and I posted a video of it in that article. Shortly after posting that article, I had a hard drive go bad and didn’t have a backup that included the source for that project. At some point, I want to rewrite that app as well as finish out the app and article showing how to do the same using SharpGL.

  8. masangga says:

    Hi, sorry to disturb you,

    I just wanna ask, how if we combine emgu cv + WPF to make AR system?


  9. carl says:

    How do I save a picture or record video, after it detects the mark.
    Please help me.

  10. rohan1408 says:

    Hi Lucus,
    I downloaded your example but not able to execute it. I am getting error on line # 61
    CvCapture cap = Cv.CreateCameraCapture(1); //Failed to create CvCapture

    Below is the stack trace :
    at OpenCvSharp.CvCapture..ctor(Int32 index)
    at OpenCvSharp.Cv.CreateCameraCapture(Int32 index)
    at AugmentedReality.Form1.Run() in D:\Users\rohan\Desktop\OpenCvAugmentedReality\AugmentedReality\AugmentedReality\Form1.cs:line 62
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()

    I tried running EXE also but it crashes after few seconds.
    I am using Windows 7 ultimate (64-bit),
    Visual studio 2010 Ultimate

    Please help me in running the code.

  11. hgomez says:

    Hi Lucus, I have found very useful your website. I have searched in the entire web and your articles are really awesome. I would like to develop an application and use on it “background removal” (stay a person in front of the camera and only see his body) but it has been a little hard to find a good article and a way to make it work fine. Do you have any clue about how to do it ? I’ll appreciate your help. Thanks

    • LuCuS says:

      If the person is moving, you can do background subtraction using MOG (mixture of gaussians). Here is some code to get you started.

      using System;
      using OpenCvSharp;
      using OpenCvSharp.CPlusPlus;

      namespace GomezTest
      {
          class BgSubtractorMOG
          {
              public BgSubtractorMOG()
              {
                  CvCapture cap = CvCapture.FromCamera(CaptureDevice.Any);
                  BackgroundSubtractorMOG mog = new BackgroundSubtractorMOG();
                  CvWindow w = new CvWindow("Background Subtraction");
                  // Mat takes rows (height) first, then cols (width).
                  Mat imgFg = new Mat(cap.FrameHeight, cap.FrameWidth, MatrixType.U8C1);
                  IplImage img;

                  while (CvWindow.WaitKey(10) < 0)
                  {
                      img = cap.QueryFrame();
                      mog.Run(new Mat(img, false), imgFg, 0.01); // 0.01 = learning rate
                      // Show the foreground mask; swap in "w.Image = img;" to see the raw feed.
                      w.Image = imgFg.ToIplImage();
                  }
              }
          }
      }

      • hgomez says:

        Thanks for your quick response. Your code is running and I see white pixels in the parts when the camera detects a move. Then, what would I need to do now with it ? How could I create the body contour with it and later replace those white pixels or contour pixels with the camera video ? Any help or clue will be really appreciate. Thanks Lucus

      • muhusin says:

        Dear Lucus,
        Its a wonderful and great contribution you are doing…Highly impressed and My Salute for you…

        When I try the above code,BackgroundSubtractorMOG this is not a valid class in my build of OpenCvSharp.CPlusPlus,
        I have tried this code in your sample code posted in ‘Augmented Reality Using C# and OpenCV’ (2011). Need your great help…Also would need your help on placing 3D models.

  12. osiel says:

    Hi man, i’m impressed with your job. I’m very new in this and I want to ask if you can show me or post some examples of how to recognize different markers. Thank you very much.

    • LuCuS says:

      Thanks. One of the easiest ways to do that is using template matching, as shown in my Template Matching article. However, you’ll have to add a lot more code than what I provided in that article. If you plan to achieve something more advanced, you should look into using something like the SURF algorithm. OpenCV provides a nice method for SURF that allows you to extract key points from an object and check for the occurrence of those same points in other images. The first thing you will need to do is get an array of the key points in the marker you want to look for. Then, using a while-loop, you will want to iterate the frames from your video / camera and repeat the same code on each frame. After that, you can compare the key points from your marker against the key points in your frame and voila. Here is some quick code to get you started.

      IplImage hiro = Cv.LoadImage("hiro.jpg", LoadMode.GrayScale);
      CvMemStorage storage = Cv.CreateMemStorage(0);

      CvSeq objectKeypoints;
      CvSeq objectDescriptors;

      CvSURFParams param = new CvSURFParams(500, true);
      Cv.ExtractSURF(hiro, null, out objectKeypoints, out objectDescriptors, storage, param);
      CvPoint[] srcCorners = new CvPoint[4] { new CvPoint(0, 0), new CvPoint(hiro.Width, 0), new CvPoint(hiro.Width, hiro.Height), new CvPoint(0, hiro.Height) };

      And here is the image used as the marker.

      Hiro Computer Vision Marker
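      The per-frame half of that explanation, iterating the camera feed and extracting SURF keypoints from each frame for comparison against the marker’s, might look something like the sketch below. It assumes the `storage`, `param`, `objectDescriptors`, and `srcCorners` variables from the snippet above are in scope; the descriptor-matching step itself is left as a comment, since a full nearest-neighbor match is more code than fits in a quick example.

      ```csharp
      CvCapture cap = Cv.CreateCameraCapture(0);
      while (CvWindow.WaitKey(10) < 0)
      {
          IplImage frame = cap.QueryFrame();

          // SURF works on single-channel images, so convert the frame to grayscale.
          IplImage gray = Cv.CreateImage(Cv.GetSize(frame), BitDepth.U8, 1);
          Cv.CvtColor(frame, gray, ColorConversion.BgrToGray);

          CvSeq<CvSURFPoint> frameKeypoints;
          CvSeq<float> frameDescriptors;
          Cv.ExtractSURF(gray, null, out frameKeypoints, out frameDescriptors, storage, param);

          // Compare frameDescriptors against objectDescriptors (e.g. nearest-neighbor
          // matching on the descriptor vectors). If enough keypoints match, estimate a
          // homography from srcCorners to the matched frame locations to find the marker.

          Cv.ReleaseImage(gray);
      }
      ```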

    • osiel says:

      Hi again, sorry to bother but don’t you have any example like you did in this, but with different markers?
      I have tried a lot with the example you gave me, but i dont get it well.

  13. aft1972 says:


    Thanks for your work

    i am trying to build a fashion augmented reality application

    but i have 2 problems:
    1. the color of the photo you display on the chess board is mixed with the background color. how can i made the photo displayed as solid.

    2. when i put the chess board on my stomach i want the dress photo to cover all of my body centered at the chess board. is that available??


  14. aft1972 says:

    hello Lucus

    Is there any English documentation for “OpenCvSharp .NET wrapper for OpenCV”??


    • LuCuS says:

      I’m not aware of any documentation that is purely in English. In fact, there isn’t a whole lot of documentation at all. When I’m working with OpenCvSharp, I refer to the OpenCV documentation and manually work out the corresponding code in OpenCvSharp.

      Because of the lack of documentation, I wrote a book that explains everything. I completed the book around the beginning of 2012. Unfortunately, I have been so busy over the last year that I haven’t had time to complete the editing process with the company that has agreed to publish the book. At some point I have to make time to complete the editing process or my publisher might cancel our agreement.

  15. Warat says:

    Hello Lucus
    Thanks for your work.
    I try to run your source code and I found a problem.
    Program show black screen but the light of webcam is still on.
    How can I fix this problem?
    (Run on Windows 8.1)

  16. Nimesh says:

    Hello Lucas, I downloaded your code but I am also getting a blank screen. Sad, I wanted to show my college friends this. I am using a Dell laptop E5440.

  17. henrikl says:

    Hi LuCuS,

    Thank you for sharing. I love your articles. Please keep up the good work.

    I have downloaded your example Augmented Reality Using C# and OpenCV. It works well for about 10 seconds and then I get an error at the line “neg_img = Cv.CreateImage(Cv.GetSize(img), BitDepth.U8, 3);”
    saying “An unhandled exception of type ‘OpenCvSharp.OpenCVException’ occurred in OpenCvSharp.dll, Additional information: Failed to allocate 921600 bytes”

    I converted your C# example to VB.Net and again after about 10 seconds the error appeared at the line “disp = Cv.CreateImage(Cv.GetSize(img), BitDepth.U8, 3)”

    I have tried in debug mode Visual Studio 2010 and 2013 and with a compiled exe file. Same result.

    My computer is equipped with an Intel Core2 processor and with 4 GB RAM, running Windows 8.1 64 bit, my Webcam is a Microsoft LifeCam Cinema.

    By the way I have rebooted my machine and only running program is Visual Studio.

    Thank you in advance,

Leave a Reply