
A while back, I wrote a few articles about using OpenCV with C#. One of those articles was titled “OpenCV Eye Tracking with C#”. However, that title was wrong: the article was actually about “head” tracking with OpenCV and C#. I only just caught that mistake and have corrected it. To make up for it, I will now show you how to do what that title originally promised: “eye” tracking with C# and OpenCV. If you read the article about “OpenCV Head Tracking with C#“, this article will be a breeze for you, as it only requires a couple of minor changes to the head tracking application from that article.

Tracking eyes with OpenCV and C# is nearly identical to head tracking, so go ahead and read OpenCV Head Tracking with C# if you haven’t done so already. At the beginning of the code, you will notice variables for Scale, ScaleFactor, and MinNeighbors. As I mentioned in the head tracking article, you can alter these variables to get different results. Since eyes are quite a bit smaller than a head, we need to modify the Scale and MinNeighbors variables. Start with the Scale variable and drop it down to the neighborhood of 1.25. This shrinks the circles that will be drawn on the screen and downsamples each frame less aggressively, so OpenCV can still find objects smaller than the head it was looking for before. Next, change MinNeighbors to 2. Despite the name, this variable is not the number of objects to find; it is the minimum number of overlapping candidate detections required before a hit is kept, and lowering it makes the detector more permissive, which helps with small targets like eyes.

The next part is just as easy. In the head tracking article, you will have noticed that we used a cascade file provided with the OpenCV installation. Since we were tracking the head in that article, we went with the haarcascade_frontalface_alt2.xml cascade. For tracking eyes, you simply need to change this path to point to the haarcascade_eye.xml cascade instead. While you’re at it, you’ll probably want to change all “face” references to say “eye” instead, though that step isn’t strictly necessary if you’re just playing around.

That’s it! With those few changes, you can now track eyes using OpenCV and C#. If everything went correctly (and you already had the head tracking app working), you should see something like this:

OpenCV Eye Tracking

In case you don’t want to go back and look at my head tracking article for the code, here it is for eye tracking. Simply copy and paste this code as EyeDetect.cs. Then, in your Program.cs, construct a new instance with “EyeDetect ed = new EyeDetect();”.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using OpenCvSharp;

namespace EyeDetect
{
    class EyeDetect
    {
        public EyeDetect()
        {
            CvColor[] colors = new CvColor[]{
                new CvColor(0,0,255),
                new CvColor(0,128,255),
                new CvColor(0,255,255),
                new CvColor(0,255,0),
                new CvColor(255,128,0),
                new CvColor(255,255,0),
                new CvColor(255,0,0),
                new CvColor(255,0,255),
            };

            const double Scale = 1.25;
            const double ScaleFactor = 2.5;
            const int MinNeighbors = 2;

            using (CvCapture cap = CvCapture.FromCamera(2)) // camera index; use 0 if you only have one camera
            using (CvWindow w = new CvWindow("Eye Tracker"))
            {
                while (CvWindow.WaitKey(10) < 0)
                {
                    using (IplImage img = cap.QueryFrame())
                    using (IplImage smallImg = new IplImage(new CvSize(Cv.Round(img.Width / Scale), Cv.Round(img.Height / Scale)), BitDepth.U8, 1))
                    {
                        
                        using (IplImage gray = new IplImage(img.Size, BitDepth.U8, 1))
                        {
                            Cv.CvtColor(img, gray, ColorConversion.BgrToGray);
                            Cv.Resize(gray, smallImg, Interpolation.Linear);
                            Cv.EqualizeHist(smallImg, smallImg);
                        }
                        
                        using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile("C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_eye.xml"))
                        using (CvMemStorage storage = new CvMemStorage())
                        {
                            storage.Clear();

                            Stopwatch watch = Stopwatch.StartNew();
                            CvSeq<CvAvgComp> eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
                            watch.Stop();
                            Console.WriteLine("detection time = {0}ms", watch.ElapsedMilliseconds);

                            for (int i = 0; i < eyes.Total; i++)
                            {
                                CvRect r = eyes[i].Value.Rect;
                                CvPoint center = new CvPoint
                                {
                                    X = Cv.Round((r.X + r.Width * 0.5) * Scale),
                                    Y = Cv.Round((r.Y + r.Height * 0.5) * Scale)
                                };
                                int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
                                img.Circle(center, radius, colors[i % 8], 3, LineType.AntiAlias, 0);
                            }
                        }
                        
                        w.Image = img;
                    }
                }
            }
        }
    }
}


67 Responses to OpenCV Eye Tracking with C#

  1. AmarjeetAlien says:

    Thanks a lot LuCuS.
    I’m developing an eye mouse using OpenCV and C# (with, of course, OpenCvSharp). The eyeball movements are to be synced with cursor movements and blinks with clicks (a left-eye blink is the same as a left click).
    I’m entirely new to all these packages (though I have some theoretical knowledge of C#) and am finding it very difficult to achieve.
    So far I’ve designed a “mouse properties” window-like GUI (only the outline; I don’t know how to add controls yet, working on that) and can capture the camera into an image box (here is the outline: http://img27.imageshack.us/i/imouse2.png/).
    But I don’t know how to smooth the video (output shown in a different image box) and locate and track the eyeball and blinking at the same time.

    I’ve been googling for quite some time; there aren’t many active sites/blogs for “OpenCV + C# + OpenCvSharp”. I luckily found yours and it has proved a great help. Thanks again.
    If this interests you, then please do some tutorials on the same. I’m in a hurry, sorry for that, but I only have two weeks to submit the project (yeah, that’s right, it’s my last-semester project).

    With this code I’m running into a problem with “Stopwatch”.

  2. AmarjeetAlien says:

    That worked…I forgot to include all “using” statements.

    • LuCuS says:

      If you’re using this for an actual application, I would remove the stopwatch stuff; that’s only there for debugging. If your app seems to be running a little slow (not in sync with real-time movement), you can always swap to using Parallel if you have more than one processor or core. There are also a lot of other little things in the eye tracking example that were only included for demonstration purposes and can be removed.

      Here is a watered-down version of the same example that is a lot smaller, cleaner, and faster. It uses parallel programming to make use of multi-core systems, whereas the example above is only designed to use one core. I’ve also removed the circles since they’re not really needed here; instead, since I already have access to a rectangle in the CvAvgComp (eye) object, I simply draw that same rectangle onto the IplImage. You’ll also notice that I moved the Haar classifier outside of the while loop, which prevents the app from having to locate and load that file during every iteration. One last thing: I removed all of the downsampling. Although that is necessary for larger object tracking, it isn’t needed here since you’re only looking for a small portion of the video anyway. All of this drastically speeds up the app.

      const double ScaleFactor = 2.5;
      const int MinNeighbors = 2;

      using (CvCapture cap = CvCapture.FromCamera(1))
      using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile("C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_eye.xml"))
      {
          while (true)
          {
              using (IplImage img = cap.QueryFrame())
              {
                  using (CvMemStorage storage = new CvMemStorage())
                  {
                      storage.Clear();

                      CvSeq eyes = Cv.HaarDetectObjects(img, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));

                      foreach (CvAvgComp eye in eyes.AsParallel())
                      {
                          img.Rectangle(eye.Rect, CvColor.Red);
                      }
                  }

                  Bitmap bm = BitmapConverter.ToBitmap(img);
                  bm.SetResolution(pctCvWindow.Width, pctCvWindow.Height);
                  pctCvWindow.Image = bm;
              }
          }
      }

      • LuCuS says:

        There are also a few other things you can do to speed things up. The first is to move the new CvMemStorage object outside of the while loop; I should have caught this in the last comment. You can also get rid of the “using” wrappers around each section of code. Depending on which C# programmer you ask, some will say it’s better to wrap everything like this and others will say it’s not; in my experience, not wrapping everything can improve speed and performance in most cases. It’s also a good idea to set your IplImage & Bitmap references to null after every iteration of the while loop, which makes those objects eligible for garbage collection sooner. So, here is the final variation of the example above.

        const double ScaleFactor = 2.5;
        const int MinNeighbors = 2;

        CvCapture cap = CvCapture.FromCamera(1);
        CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile("C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_eye.xml");
        CvMemStorage storage = new CvMemStorage();
        while (true)
        {
            IplImage img = cap.QueryFrame();
            CvSeq eyes = Cv.HaarDetectObjects(img, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));

            foreach (CvAvgComp eye in eyes.AsParallel())
            {
                img.Rectangle(eye.Rect, CvColor.Red);
            }

            Bitmap bm = BitmapConverter.ToBitmap(img);
            bm.SetResolution(pctCvWindow.Width, pctCvWindow.Height);
            pctCvWindow.Image = bm;

            storage.Clear();
            img = null;
            bm = null;
        }

  3. AmarjeetAlien says:

    Thanks for your quick reply (I’m late; time zone!).
    I made your previous code work (http://img851.imageshack.us/i/imouse3.png/), but the only problem is that it is not real time (I’ll try including CvHaarClassifierCascade in the while loop).

    The code in the reply shows an error for eyes.AsParallel():
    [Error 1 'OpenCvSharp.CvSeq' does not contain a definition for 'AsParallel' and the best extension method overload 'System.Linq.ParallelEnumerable.AsParallel(System.Collections.IEnumerable)' has some invalid arguments D:\visual studio 2010\Projects\OpenCvSharp\TrackEye2\TrackEye2\Form1.cs 32]
    and pctCvWindow [does not exist in the current context]. Am I missing something, like some “using” statement?
    Thanks!

    • LuCuS says:

      pctCvWindow is a picture box located on my Windows form. I’m not sure why CvSeq says it doesn’t have AsParallel; I’ll have to look into that one. Try clicking on Project > Properties… and checking what your Target framework is. If it’s “.NET Framework 4 Client Profile”, try changing it to just “.NET Framework 4”. Also, make sure that you are using .NET 4; anything before that did not natively support parallel programming.

  4. AmarjeetAlien says:

    Hey!
    I just need to tell you that I’m capturing the camera for the 1st image box (EyeCapture):
    private void PlayCam()
    {
        src = cap.QueryFrame();
        IplImage tmp = src.Clone();
        src1 = tmp.Clone();
        EyeImage.Image = tmp.ToBitmap();
    }
    Here src and src1 are globals.
    I’ve modified your code like this:
    private void LocateEye(IplImage sendImg)
    {
        CvColor[] colors = new CvColor[]{
            new CvColor(0,0,255),
            new CvColor(0,128,255),
            new CvColor(0,255,255),
            new CvColor(0,255,0),
            new CvColor(255,128,0),
            new CvColor(255,255,0),
            new CvColor(255,0,0),
            new CvColor(255,0,255),
        };
        const double Scale = 1.25;
        const double ScaleFactor = 2.5;
        const int MinNeighbors = 2;
        IplImage img = sendImg;
        using (IplImage smallImg = new IplImage(new CvSize(Cv.Round(img.Width / Scale), Cv.Round(img.Height / Scale)), BitDepth.U8, 1))
        {
            using (IplImage gray = new IplImage(img.Size, BitDepth.U8, 1))
            {
                Cv.CvtColor(img, gray, ColorConversion.BgrToGray);
                Cv.Resize(gray, smallImg, Interpolation.Linear);
                Cv.EqualizeHist(smallImg, smallImg);
            }
            using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile("D:\\OpenCV2.2\\data\\haarcascades\\haarcascade_eye.xml"))
            using (CvMemStorage storage = new CvMemStorage())
            {
                storage.Clear();
                Stopwatch watch = Stopwatch.StartNew();
                CvSeq eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
                watch.Stop();
                Console.WriteLine("detection time = {0}ms", watch.ElapsedMilliseconds);

                for (int i = 0; i < eyes.Total; i++)
                {
                    CvRect r = eyes[i].Value.Rect;
                    CvPoint center = new CvPoint
                    {
                        X = Cv.Round((r.X + r.Width * 0.45) * Scale),
                        Y = Cv.Round((r.Y + r.Height * 0.45) * Scale)
                    };
                    int radius = Cv.Round((r.Width + r.Height) * 0.15 * Scale);
                    img.Circle(center, radius, CvColor.Red, 1, LineType.AntiAlias, 0);
                }
            }
            PupilImage.Image = img.ToBitmap();
        }
    }

    LocateEye is called when the "PowerUrEyes" button is clicked:
    private void Power_Click(object sender, EventArgs e)
    {
        if (Playing != false)  // a bool set to true when the webcam starts
            LocateEye(src1);
    }
    To be precise: I’m not capturing the camera for the 2nd picture box (yes, it is not a window). Video captured in the 1st image box is used (supposed to be, anyway) as real-time input for the 2nd picture box.
    My problem: the 2nd box is not real-time and works only once, and I have to keep pressing the "PowerUrEyes" button! :(
    Thanks!

  5. AmarjeetAlien says:

    If you have any good link that deals with pupil center detection and/or blink detection using OpenCV and C#, please let me know.
    My video will basically contain eyes only, so I can go straight to pupil detection. I have to pass the pupil center to a mouse handler (another problematic topic) to perform mouse operations.

    I can mail you my project, but only if you ask; I know you must be busy and I don’t intend to trouble you without your permission!
    Thanks!

    • LuCuS says:

      It’s no trouble. I’m here to help any way I can. I shouldn’t need your project though. I think I have an idea of what you are going for.

    • LuCuS says:

      A while back, I wrote an application that was kinda like this. It was an app that detected exactly where users were looking at their screen. I designed it for a marketing company that used it for a study to see where the “hotspots” were on their clients’ webpages. They then used a heat map to show the hotspots on each of those webpages which they then determined to be the most viewed areas of those webpages. With that information, they were able to tweak the pages for better conversions and revenues. The app was also capable of detecting how far the user was sitting from the camera which improved the accuracy of the app.

      Anyways, once the app knew how far the user was, it only watched the pupils and could determine where the user was looking. With a few modifications, it could also determine whether the user blinked and which eye they blinked with. I’m trying to see if I still have the source code for that app. If not, I’ll try to throw something together similar as I think that might be a good fit for what you’re trying to do.

      • AmarjeetAlien says:

        Well, that sounds great!
        I’ve implemented the blink detection module separately, but it’s a console app. I’m having trouble adding a new class and using its objects and functions outside of the class. As you will notice in my project, I added “DetectEyes.cs” but am not using it; instead I pasted the same code into “Form1.cs”. Actually, I couldn’t access “PupilImage.Image” from there, which of course I can’t, at least not in that simple way.

        Here it is: http://www.filedropper.com/projects

        I’ll also mail you using my Yahoo account in case the above link doesn’t work. Let me know; Gmail sucks for *.exe files.
        Thanks!

  6. abbid_siddiqui says:

    Hi LuCuS,
    It’s indeed a wonderful article. I have a few questions; please respond:
    1) If we have more than one webcam, can we enumerate them using this library? If not, how is it possible?
    2) How can we control the CvWindow object if we want to raise events on the basis of some action, like generating an event if the face count increases to 2?
    3) How can we set the distance of the default user from the camera?

    • LuCuS says:

      1) At line 29 of the code above, you will see the number 2 being passed as the parameter for the “FromCamera” method. “CvCapture.FromCamera(2)”. This number is the index of the camera as it is found on your computer. That means, if you have 2 cameras connected at the same time, you can change this number to switch between the 2 cameras. When I wrote this article, I had 2 cameras plugged in at the same time and the camera I chose to use was found at index 2.

      2) Just before line 55, you could add something like:
      if(eyes.Total > 1) …. // do something here

      3) To get the distance between the user and the camera, I use the Haar cascade for head tracking from another article you can find at http://www.prodigyproductionsllc.com/articles/programming/opencv-head-tracking-with-c/. Once you’ve located the head in the camera, you can use a simple formula that compares the head’s apparent size in the frame to its real-world size to estimate the distance from the camera. The average human adult head is about 17-18 centimeters wide and 21-22 centimeters long. I can’t think of the exact formula I use for this, but I’m sure you could do a quick Google search to find something that works.
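
      In code, the idea is a similar-triangles (pinhole camera) estimate. This is a sketch under assumptions rather than the exact formula: the focal length in pixels is a hypothetical per-camera constant you would calibrate once from a shot at a known distance, and 17.5 cm is just the average head width mentioned above.

```csharp
using System;

// Pinhole-camera distance sketch: distance = realWidth * focalLength / apparentWidth.
// FocalLengthPx is a per-camera calibration constant (hypothetical here).
static class CameraGeometry
{
    // Estimate distance (cm) from the detected head width in pixels.
    public static double EstimateDistanceCm(double headWidthPx, double focalLengthPx,
                                            double headWidthCm = 17.5)
    {
        return headWidthCm * focalLengthPx / headWidthPx;
    }

    // One-time calibration: if a head of known real width appears headWidthPx wide
    // at a known distance, the focal length in pixels follows from the same formula.
    public static double CalibrateFocalLengthPx(double headWidthPx, double knownDistanceCm,
                                                double headWidthCm = 17.5)
    {
        return headWidthPx * knownDistanceCm / headWidthCm;
    }
}
```

      Calibrate once, then feed in the detected head width (r.Width scaled back up) from each frame.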

      Back to question #2, you could combine the code from the eye tracking article and the head tracking article to watch for “extra” faces appearing in the camera. Let me know if you have any problems doing this. If you do, I’ll see what I can do to help you out.

  7. abbid_siddiqui says:

    One more question I forgot to ask:
    Can we control the camera’s settings, like brightness and contrast, with this library? How?

  8. abbid_siddiqui says:

    Sorry to disturb you again. I have applied your face detection and eye detection examples and they are running perfectly. One problem I am facing is that I am unable to find the code to stop the camera; when I stop my application, the camera remains running. How do I turn it off? I’ll also let you know my findings from your earlier suggestions.

    • LuCuS says:

      At line 32, you’ll see “while (CvWindow.WaitKey(10) < 0)”. This tells your CvWindow to listen for a key press, so to close your window, you’ll need to click on the window that shows your video feed and press any key. If you want a little more control over your windows and application, I wrote another article showing how to use OpenCV inside a standard Windows Forms application. In that article, I show you how to add buttons to tell your camera to start or stop recording, as well as a button to save a screenshot of your video feed to an image on your file system. You can find that article at http://www.prodigyproductionsllc.com/articles/programming/use-opencv-in-a-windows-form-application-in-c/. Let me know if this helps.

  9. abbid_siddiqui says:

    I am using the following code for eye detection, but I am unable to stop the camera; its light remains on even after stopping:

    private Thread eyeThread;

    private void button3_Click(object sender, EventArgs e)
    {
        // enhanced eye detection with less memory consumption
        CaptureCameraForEye();
    }

    private void CaptureCameraForEye()
    {
        eyeThread = new Thread(new ThreadStart(EyeDetectionCallback));
        eyeThread.Start();
    }

    private void EyeDetectionCallback()
    {
        CvColor[] colors = new CvColor[]{
            new CvColor(0,0,255),
            new CvColor(0,128,255),
            new CvColor(0,255,255),
            new CvColor(0,255,0),
            new CvColor(255,128,0),
            new CvColor(255,255,0),
            new CvColor(255,0,0),
            new CvColor(255,0,255),
        };

        const double Scale = 1.25;        // increase for larger circles
        const double ScaleFactor = 2.5;
        const int MinNeighbors = 1;

        CvCapture cap = CvCapture.FromCamera(CaptureDevice.Any, 1);
        CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile("E:\\OpenCVSharp\\OpenCvSharp Library\\OpenCvSharp-2.2-x86-20110509\\data\\haarcascades\\haarcascade_eye.xml");
        CvMemStorage storage = new CvMemStorage();
        while (CvWindow.WaitKey(10) < 0)
        {
            IplImage img = cap.QueryFrame();
            IplImage smallImg = new IplImage(new CvSize(Cv.Round(img.Width / Scale), Cv.Round(img.Height / Scale)), BitDepth.U8, 1);

            IplImage gray = new IplImage(img.Size, BitDepth.U8, 1);

            Cv.CvtColor(img, gray, ColorConversion.BgrToGray);
            Cv.Resize(gray, smallImg, Interpolation.Linear);
            Cv.EqualizeHist(smallImg, smallImg);

            IplImage outImage = Cv.Clone(img);

            CvSeq eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));

            for (int i = 0; i < eyes.Total; i++)
            {
                CvRect r = eyes[i].Value.Rect;
                CvPoint center = new CvPoint
                {
                    X = Cv.Round((r.X + r.Width * 0.5) * Scale),
                    Y = Cv.Round((r.Y + r.Height * 0.5) * Scale)
                };
                int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
                img.Circle(center, radius, colors[i % 8], 3, LineType.AntiAlias, 0);
            }

            //foreach (CvAvgComp eye in eyes.AsParallel())
            //{
            //    img.Rectangle(eye.Rect, CvColor.Red);
            //}

            Bitmap bmp = BitmapConverter.ToBitmap(img);
            bmp.SetResolution(pictureBox1.Width, pictureBox1.Height);
            pictureBox1.Image = bmp;

            storage.Clear();
            //img = null;
            //smallImg = null;
            //gray = null;
            //bmp = null;
        }
    }

    private void btnStop_Click(object sender, EventArgs e)
    {
        if (eyeThread != null && eyeThread.IsAlive)
            eyeThread.Abort();

        pictureBox1.Image = null;
    }

    • LuCuS says:

      Try setting “CvCapture cap;” as a global variable. Then, in your stop-click handler, try calling cap.Dispose(). I haven’t run into this problem; every time I close my app, the light on my camera turns off and it stops recording. If that doesn’t work, I’ll try to recreate it when I get home tonight and find a solution for you.
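
      The shape of the fix looks like this. Since OpenCvSharp isn't loaded here, a hypothetical stand-in class plays the role of CvCapture; the point is just that the capture lives in a field and gets disposed by the stop handler.

```csharp
using System;

// StubCapture stands in for OpenCvSharp's CvCapture (hypothetical stub):
// disposing it is what actually releases the device and turns the light off.
class StubCapture : IDisposable
{
    public bool IsOpen { get; private set; } = true;
    public void Dispose() { IsOpen = false; }   // CvCapture.Dispose releases the camera
}

class CameraController
{
    private StubCapture cap;   // field ("global") scope, not local to the start handler

    public void Start() { cap = new StubCapture(); }

    public void Stop()
    {
        if (cap != null)
        {
            cap.Dispose();     // camera light should turn off here
            cap = null;
        }
    }

    public bool IsRunning { get { return cap != null && cap.IsOpen; } }
}
```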

      • abbid_siddiqui says:

        Yeah, it works, thanks. However, I am working on enhancements and will let you know about my future queries, if any.

          • abbid_siddiqui says:

            Hi LuCuS,
            I am facing a problem with the above-mentioned code: after 3-4 minutes, the application starts getting slow, and after some time it hangs because it runs out of memory. What I have found is that it is not destroying the objects quickly enough. Please suggest how to free up memory so it runs smoothly.

          • LuCuS says:

            A lot of the stuff in the code above can be rearranged for better performance. For example, you can strip out the stopwatch stuff, as that’s only there for testing and debugging. Make sure you remove the Console.WriteLine call at line 53; any time you write to the console, your app will run a little slower. Also, move lines 45 and 46 to before the while loop at line 32: those objects only need to be created once and don’t need to be recreated on every iteration of the loop. You can also try changing the Scale and ScaleFactor variables at line 25, which can also alter your performance. Those changes should drastically speed things up. If they don’t, let me know and I’ll see if I can find a few more tweaks for you to try.

          • abbid_siddiqui says:

            Hi
            I have found another issue: while using the code, sometimes it draws 2 circles on the face instead of one (one around the eyes and chin area, a second around the complete head and face). I have checked in the code that, at that moment, it reports the face count as 2 instead of 1. Please suggest.

          • LuCuS says:

            Are you using the face / head tracking application or are you using the eye tracking application? If you’re using the head tracking application from this article, you can change line 57 to stop tracking after the first face is located. To do that, change this line:

            for (int i = 0; i < faces.Total; i++)

            to this:

            for (int i = 0; i < 1; i++)

            However, by doing this, your application will only be able to track 1 person at a time. Also, make sure you have good lighting in the area you’re capturing your video in. The Haar classifier for head tracking will sometimes detect just the face portion and then the entire head and then back to the face again if the camera has insufficient lighting.

          • abbid_siddiqui says:

            If there is an option to attach files, let me know; I can post the captured image for clarity.

          • abbid_siddiqui says:

            LuCuS,
            My requirement is that my application must be able to detect more than one face; in that case, this code won’t work. What do you suggest to avoid this problem? I have a trial version of another application that detects this properly:
            http://www.oculislabs.com/
            Kindly suggest.

          • LuCuS says:

            One thing you can do: if you have more than one face detected on your screen, get the coordinates of both and see if they overlap. If they overlap by more than about 15%, chances are one of them is a false positive and can be ignored. You can do that inside the for-loop that gives you a rectangle for each located face.
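
            A rough sketch of that overlap test, using System.Drawing.Rectangle in place of CvRect (the 15% figure is the rough threshold above, not a tuned value):

```csharp
using System;
using System.Drawing;   // Rectangle, Rectangle.Intersect

static class FaceFilter
{
    // True if the two detections overlap by more than `threshold` of the
    // smaller rectangle's area, i.e. they are probably the same face.
    public static bool ProbablySameFace(Rectangle a, Rectangle b, double threshold = 0.15)
    {
        Rectangle inter = Rectangle.Intersect(a, b);
        if (inter.IsEmpty) return false;

        double interArea = (double)inter.Width * inter.Height;
        double smallerArea = Math.Min((double)a.Width * a.Height,
                                      (double)b.Width * b.Height);
        return interArea / smallerArea > threshold;
    }
}
```

            Run it over every pair of detected rectangles and drop one of any pair that matches.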

            If that doesn’t work, I have a program I wrote that does a better job at face detection, does not use OpenCV, and is a lot faster. It uses pure C# and no 3rd-party libraries. However, the only thing that program is good for is tracking faces; if you need to track anything else (eyes, hands, etc.), it won’t work in its current form. I’ve been planning to extend its functionality but haven’t had the time.

          • abbid_siddiqui says:

            Yes, currently I only need face tracking, not eyes or hands. If you can, please share that code, if it’s not confidential.

          • LuCuS says:

            I’ll dig it up when I get home tonight and will send you a link to download it.

          • abbid_siddiqui says:

            Hi LuCuS,
            Sorry to disturb you again. The face detection circle is not constant; it blinks continuously (sometimes the face count shows 0, i.e. the circle disappears, then 1, then 2). So what criteria should I implement to detect an intruder? For example, if it detects an extra circle for, say, 5 seconds, it should recognize an intruder and perform the required action.

          • LuCuS says:

            Unfortunately this tool isn’t perfect. You have to remember that OpenCV isn’t watching the overall continuous video feed; instead, it has to look at every individual image, frame by frame. Sometimes it gets false positives, and sometimes the processor is bogged down, leaving the framework too little time to process some frames, so it doesn’t display the circle all the time or keep it constant. Depending on the movement of the person in the video, the lighting, the quality of the camera, the processor speed, how much memory is in the computer, and so on, the framework will behave differently. How far the person is from the camera can also play a big part in its performance.

            One thing you could consider is wrapping your circle with some kind of timer that keeps the app displaying the circle for N milliseconds even after OpenCV no longer recognizes any faces. You would have to check that a face has not already been detected in the same area as the existing circle before drawing another one. You could check whether OpenCV has already detected a face in that area by checking for overlapping circles, or by getting the center points of the current circle and the newly detected face and seeing whether they’re within a certain distance of each other.
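
            A minimal sketch of that timer idea (the hold time is illustrative; tune it to your frame rate):

```csharp
using System;
using System.Diagnostics;

// Remembers when a face was last detected and keeps reporting "draw the circle"
// until a grace period expires, smoothing over frames with no detection.
class DetectionHolder
{
    private readonly double holdMs;
    private readonly Stopwatch sinceLast = new Stopwatch();

    public DetectionHolder(double holdMs = 500) { this.holdMs = holdMs; }

    // Call this on every frame where the detector actually found a face.
    public void Detected() { sinceLast.Restart(); }

    // True while the last detection is fresh enough to keep drawing.
    public bool ShouldDraw
    {
        get { return sinceLast.IsRunning && sinceLast.Elapsed.TotalMilliseconds < holdMs; }
    }
}
```

            Call Detected() whenever OpenCV reports a face in the circle's area, and keep drawing the last circle while ShouldDraw stays true.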

            Can you share with me the purpose of your project? Maybe I can recommend a better solution. Although I think OpenCV is the best “overall” tool, I do know there are some other tools that are a little better at specific tasks. For example, there are some frameworks that make better use of the GPU instead of fully relying on the CPU to handle all of the processing.

          • LuCuS says:

            Here is a modified version of the head tracker app that does a pretty decent job running on one of my older laptops. It’s very fast and does a great job of keeping my face circled at all times even when I move around and even turn my head. Let me know if this helps.


            using System;
            using System.Collections.Generic;
            using System.Diagnostics;
            using System.Windows.Forms;
            using System.Runtime.InteropServices;
            using OpenCvSharp;

            namespace FaceDetect
            {
                class FaceDetect
                {
                    public FaceDetect()
                    {
                        const double Scale = 2.0;
                        const double ScaleFactor = 2.5;
                        const int MinNeighbors = 1;

                        CvCapture cap = CvCapture.FromCamera(2);
                        CvWindow w = new CvWindow("Face Tracker");
                        CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile("C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_frontalface_alt2.xml");
                        CvMemStorage storage = new CvMemStorage();

                        while (CvWindow.WaitKey(10) < 0)
                        {
                            IplImage img = cap.QueryFrame();
                            IplImage smallImg = new IplImage(new CvSize(Cv.Round(img.Width / Scale), Cv.Round(img.Height / Scale)), BitDepth.U8, 1);

                            using (IplImage gray = new IplImage(img.Size, BitDepth.U8, 1))
                            {
                                Cv.CvtColor(img, gray, ColorConversion.BgrToGray);
                                Cv.Resize(gray, smallImg, Interpolation.Linear);
                                Cv.EqualizeHist(smallImg, smallImg);
                            }

                            storage.Clear();
                            CvSeq faces = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));

                            for (int i = 0; i < faces.Total; i++)
                            {
                                CvRect r = faces[i].Value.Rect;
                                CvPoint center = new CvPoint
                                {
                                    X = Cv.Round((r.X + r.Width * 0.5) * Scale),
                                    Y = Cv.Round((r.Y + r.Height * 0.5) * Scale)
                                };
                                int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
                                img.Circle(center, radius, CvColor.Red, 3, LineType.AntiAlias, 0);
                            }

                            w.Image = img;
                        }
                    }
                }
            }

          • abbid_siddiqui says:

            Thanks LuCuS, so far I am able to manage the memory in my application. You asked about the nature of the project. It is based on face detection, of course; whenever an extra face is detected, it will perform different actions.
            That is the main scenario of the application so far.
            Regards

          • LuCuS says:

            Cool. Another framework you might want to consider is AForge (http://www.aforgenet.com/). I’ve done a lot of work with AForge and found it to be a nice framework as well. It’s very fast! I’ve used it for tracking people in a crowded area. Once my app “sees” a person, it will give them an index and follow them the entire time they’re within view of the camera. I’ve had it successfully track more than 30 people at the same time without any performance problems.

            There’s also OpenCVdotNET (http://code.google.com/p/opencvdotnet/). It’s kinda like OpenCVSharp. It too is just a wrapper for the OpenCV framework. I haven’t messed with it much, but I’ve read good things about it.

            Another good .NET wrapper for OpenCV is Emgu (http://www.emgu.com/wiki/index.php/Main_Page). I’ve played around with Emgu a little bit. But always found myself going back to OpenCVSharp. On the Emgu website, you’ll find a comparison chart of Emgu and other OpenCV wrappers including OpenCVSharp and OpenCVdotNET.

          • abbid_siddiqui says:

            LuCuS, can we use head tracking as well as eye tracking at the same time? Can you please suggest how?

          • LuCuS says:

            Yes. You can mix and match any of the trackers by adding in the Haar classifiers for each cascade you need. Then just iterate over your detected objects by running each cascade like this:


            CvHaarClassifierCascade faceCascade = CvHaarClassifierCascade.FromFile("haarcascade_frontalface_alt2.xml");
            CvHaarClassifierCascade eyeCascade = CvHaarClassifierCascade.FromFile("haarcascade_eye.xml");
            ...
            CvSeq faces = Cv.HaarDetectObjects(smallImg, faceCascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
            for (int i = 0; i < faces.Total; i++) ...
            ...
            CvSeq eyes = Cv.HaarDetectObjects(smallImg, eyeCascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
            for (int i = 0; i < eyes.Total; i++) ...

            You can also eliminate duplicate code by creating a single method to detect the objects and draw the circles accordingly like this:


            using System;
            using System.Collections.Generic;
            using System.Diagnostics;
            using System.Windows.Forms;
            using System.Runtime.InteropServices;
            using OpenCvSharp;

            namespace Vision
            {
                class MultipleDetection
                {
                    private const double Scale = 2.0;
                    private const double ScaleFactor = 2.5;
                    private const int MinNeighbors = 1;
                    private CvMemStorage storage;

                    public MultipleDetection()
                    {
                        CvCapture cap = CvCapture.FromCamera(2);
                        CvWindow w = new CvWindow("Multiple Object Tracker");
                        CvHaarClassifierCascade faceCascade = CvHaarClassifierCascade.FromFile("C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_frontalface_alt2.xml");
                        CvHaarClassifierCascade eyeCascade = CvHaarClassifierCascade.FromFile("C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_eye.xml");
                        storage = new CvMemStorage();

                        while (CvWindow.WaitKey(10) < 0)
                        {
                            IplImage img = cap.QueryFrame();

                            DetectObjects(ref img, faceCascade);
                            DetectObjects(ref img, eyeCascade);

                            w.Image = img;
                        }
                    }

                    private void DetectObjects(ref IplImage img, CvHaarClassifierCascade cascade)
                    {
                        storage.Clear();
                        IplImage smallImg = new IplImage(new CvSize(Cv.Round(img.Width / Scale), Cv.Round(img.Height / Scale)), BitDepth.U8, 1);

                        using (IplImage gray = new IplImage(img.Size, BitDepth.U8, 1))
                        {
                            Cv.CvtColor(img, gray, ColorConversion.BgrToGray);
                            Cv.Resize(gray, smallImg, Interpolation.Linear);
                            Cv.EqualizeHist(smallImg, smallImg);
                        }

                        CvSeq objects = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));

                        for (int i = 0; i < objects.Total; i++)
                        {
                            CvRect r = objects[i].Value.Rect;
                            CvPoint center = new CvPoint
                            {
                                X = Cv.Round((r.X + r.Width * 0.5) * Scale),
                                Y = Cv.Round((r.Y + r.Height * 0.5) * Scale)
                            };
                            int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
                            img.Circle(center, radius, CvColor.Red, 3, LineType.AntiAlias, 0);
                        }
                    }
                }
            }

          • abbid_siddiqui says:

            Hi Lucus
            First of all, thanks again for the code you sent. One more thing I would like to ask. I am trying to send a frame captured from DirectShow (as a Bitmap) to OpenCV for face detection. I am also using a C# wrapper for DirectShow. But somehow it’s not displaying any image in the picture box… I am pasting my code here:

            public void OnImageCaptured(Touchless.Vision.Contracts.IFrameSource frameSource, Touchless.Vision.Contracts.Frame frame, double fps)
            {
            _latestFrame = frame.Image;
            pictureBoxDisplay.Invalidate();
            SendFrameToOpenCV( _latestFrame);

            }
            private void SendFrameToOpenCV(Bitmap frame)
            {

            IplImage img = new IplImage(new CvSize(frame.Width, frame.Height), BitDepth.U8, 3);

            //here I get the IplImage from the bitmap… the rest of the processing code is the same as the OpenCV example, but the problem is it’s only displaying a black screen in the picture box. No refreshing, etc.
            Please suggest

          • LuCuS says:

            In your SendFrameToOpenCV method, instead of creating a new IplImage, you need to build “img” using IplImage.FromBitmap like this:

            private void SendFrameToOpenCV(Bitmap frame)
            {
            IplImage img = IplImage.FromBitmap(frame);

            What you’re doing now is building a new IplImage with the same dimensions as your incoming frame, but you’re not copying the contents of that Bitmap into the new IplImage. The code above should fix that for you.

          • abbid_siddiqui says:

            I have tried this line, but it gives the following error:
            “The method or operation is not implemented.”

          • LuCuS says:

            Go to your Project Properties (accessible from the Project menu item). Then click on Application and see what Target Framework is set to. If it is something like .NET Framework 4.0 Client Profile, change it to just .NET Framework 4.0 and try the code again.

          • abbid_siddiqui says:

            I am working in VS2008 and my target framework is set to 3.5, but it’s still giving the same “method or operation is not implemented” error.

          • LuCuS says:

            Ok. Then try using this:

            private void SendFrameToOpenCV(Bitmap frame)
            {
            IplImage img = new IplImage(new CvSize(frame.Width, frame.Height), BitDepth.U8, 3);
            img.CopyFrom(frame);

          • abbid_siddiqui says:

            Yes, it works, but I am still unable to display the image in my picture box. Sending you the code.
            Kindly review. I have omitted the loop in the SendFrameToOpenCV method… also check that at the end.
            private void frmMain_Load(object sender, EventArgs e)
            {
            if (!DesignMode)
            {

            // Refresh the list of available cameras
            cboCameras.Items.Clear();
            foreach (Camera cam in CameraService.AvailableCameras)
            cboCameras.Items.Add(cam);

            if (cboCameras.Items.Count > 0)
            cboCameras.SelectedIndex = 0;

            }

            }
            //for stopping the camera
            private void thrashOldCamera()
            {
            // Trash the old camera
            if (_frameSource != null)
            {
            _frameSource.NewFrame -= OnImageCaptured;
            _frameSource.Camera.Dispose();
            setFrameSource(null);
            pictureBoxDisplay.Paint -= new PaintEventHandler(drawLatestImage);
            }
            }
            private void setFrameSource(CameraFrameSource cameraFrameSource)
            {
            if (_frameSource == cameraFrameSource)
            return;

            _frameSource = cameraFrameSource;
            }

            //send this image to the OpenCV library in order to capture the input
            public void OnImageCaptured(Touchless.Vision.Contracts.IFrameSource frameSource, Touchless.Vision.Contracts.Frame frame, double fps)
            {
            _latestFrame = frame.Image;
            pictureBoxDisplay.Invalidate();

            //here i am sending the frame to my method
            SendFrameToOpenCV(_latestFrame);

            }

            private void btnStart_Click(object sender, EventArgs e)
            {
            // Early return if we’ve selected the current camera
            if (_frameSource != null && _frameSource.Camera == cboCameras.SelectedItem)
            return;

            thrashOldCamera();
            startCapturing();
            }
            private void startCapturing()
            {
            try
            {
            Camera c = (Camera)cboCameras.SelectedItem;
            setFrameSource(new CameraFrameSource(c));
            _frameSource.Camera.CaptureWidth = 320;
            _frameSource.Camera.CaptureHeight = 240;

            _frameSource.Camera.Fps = 20;
            _frameSource.NewFrame += OnImageCaptured;

            pictureBoxDisplay.Paint += new PaintEventHandler(drawLatestImage);
            _frameSource.StartFrameCapture();
            }
            catch (Exception ex)
            {
            cboCameras.Text = "Select A Camera";
            MessageBox.Show(ex.Message);
            }
            }

            private void drawLatestImage(object sender, PaintEventArgs e)
            {
            if (_latestFrame != null)
            {
            // Draw the latest image from the active camera
            e.Graphics.DrawImage(_latestFrame, 0, 0, _latestFrame.Width, _latestFrame.Height);

            }
            }
            private void btnStop_Click(object sender, EventArgs e)
            {
            thrashOldCamera();
            timer1.Stop();
            timer1.Enabled = false;
            }

            //Open CV method
            private void SendFrameToOpenCV(Bitmap frame)
            {
            //OpenCv Code
            //Utility.img = faceCap.QueryFrame();

            Utility.img = new IplImage(new CvSize(frame.Width, frame.Height), BitDepth.U8, 3);
            Utility.img.CopyFrom(frame);

            string path = "Resources\\haarcascades\\haarcascade_frontalface_alt2.xml";
            Utility.FaceCascade = CvHaarClassifierCascade.FromFile(path);
            //Utility.EyeCascade = CvHaarClassifierCascade.FromFile("Resources\\haarcascades\\haarcascade_eye.xml"); //for using 2 classifiers at the same moment

            DetectObjects(ref Utility.img, Utility.FaceCascade);
            //DetectObjects(ref Utility.img, Utility.EyeCascade); //for using 2 classifiers at the same moment

            Utility.bmp = BitmapConverter.ToBitmap(Utility.img);
            Utility.bmp.SetResolution(pictureBox2.Width, pictureBox2.Height);
            pictureBox2.Image = Utility.bmp;

            Utility.storage.Clear();

            }

          • abbid_siddiqui says:

            Hi Lucus
            Did you get any idea of what I was saying? I found another solution for getting the frame, but it didn’t work either:

            private void SendFrameToOpenCV_N(Bitmap frame)
            {
            Utility.img = new IplImage(new CvSize(_imageInfo.Width, _imageInfo.Height), BitDepth.U8, 1);

            Utility.img.CopyFrom(frame);

            string path = “Resources\\haarcascades\\haarcascade_frontalface_alt2.xml”;
            Utility.FaceCascade = CvHaarClassifierCascade.FromFile(path);
            //Utility.EyeCascade = CvHaarClassifierCascade.FromFile(“Resources\\haarcascades\\haarcascade_eye.xml”); //for using 2 classifiers at the same moment

            DetectObjects(ref Utility.img, Utility.FaceCascade);
            //DetectObjects(ref Utility.img, Utility.EyeCascade); //for using 2 classifiers at the same moment

            Utility.bmp = BitmapConverter.ToBitmap(Utility.img);
            Utility.bmp.SetResolution(pictureBox2.Width, pictureBox2.Height);
            pictureBox2.Image = Utility.bmp;

            Utility.storage.Clear();

            }

          • LuCuS says:

            Sorry. I forgot to take a look at it. Can you send me your entire project zipped up? It’ll be easier to work out a solution if I have all of the same components that you do.

          • abbid_siddiqui says:

            Unfortunately, my connection doesn’t support sending such a huge project; however, I will try to remember the site from where I downloaded the sample that I extended. Till then, can you suggest something?

          • LuCuS says:

            A couple of things to check:

            1) Have you checked in debug that Bitmap frame (the incoming parameter) is populated properly?
            2) Utility.img = new IplImage ….. is created using _imageInfo.Width and Height. Where does _imageInfo get populated? Are you sure that the Width and Height are correct? Maybe try getting Width and Height from “frame” instead of _imageInfo.
            3) Comment out the DetectObjects call so that the image is passed directly to the BitmapConverter without being touched.

          • abbid_siddiqui says:

            Sorry, I was mistakenly doing that. I have done all 3 steps, but the behavior is still the same. It only shows black in the picture box, meaning that it is capturing the first frame only… not the other ones. _imageInfo is the variable declared at the top of the form to capture the image from the frame.

          • LuCuS says:

            Try commenting out all of your code in that function and going straight from the incoming Bitmap “frame” to pictureBox2.

            private void SendFrameToOpenCV_N(Bitmap frame)
            {
            pictureBox2.Image = frame;
            }

            If that does not work, that will tell us that the problem is not inside this function and we can look at the other functions that come before this one.

  10. AmarjeetAlien says:

    Hey LuCuS!
    I too have a couple of questions:
    1. How to get/set the id of detected objects?
    It is detecting both eyes and what I wanna do is tag them as “Leye” and “Reye” and use center of Reye as cursor position.

    2. How to stop detection when it has detected a pair of eyes?
    If someone other than the user comes in camera view field it should not detect his/her eyes.

    Thanks!

    • LuCuS says:

      Sorry for the delayed response. I’ve been wrapped up with a new startup for the last couple of months. So, I haven’t had time to reply to all comments and emails as they come in.

      To answer your first question, you can check the positions of both eyes: the eye whose X value is less than the other’s will be the subject’s right eye. You can then flag that one as Reye.
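      That comparison can be sketched in a few lines (a minimal sketch; the RightEyeIndex helper and the sample coordinates are illustrative, not part of the tracker above):

```csharp
using System;

class EyeTagger
{
    // Given the X positions of the two detected eye rectangles, return the
    // index of the subject's right eye ("Reye"): in a non-mirrored camera
    // image it is the eye with the smaller X value.
    public static int RightEyeIndex(int x0, int x1)
    {
        return x0 < x1 ? 0 : 1;
    }

    static void Main()
    {
        // Eyes detected at X = 80 and X = 180: the first one is "Reye".
        Console.WriteLine(RightEyeIndex(80, 180));  // prints 0
        Console.WriteLine(RightEyeIndex(300, 120)); // prints 1
    }
}
```

      The center of whichever rectangle gets tagged Reye can then be mapped to the cursor position.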

      For your second question, the only thing I can think of right off would be to change the for-loop at line 55 above to read “for (int i = 0; i < 2; i++)". This would tell the app to only report on the first 2 eyes the app has detected.
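      One caveat with that change: on frames where fewer than 2 eyes are detected, indexing eyes[i] beyond eyes.Total will throw an exception. A safer sketch caps the loop bound at the number actually detected (the plain array below just stands in for the CvSeq of detections):

```csharp
using System;

class CappedLoopDemo
{
    static void Main()
    {
        // Stand-in for a frame where only one eye was detected (eyes.Total == 1).
        int[] detections = { 42 };

        // Capping at Math.Min(2, count) reports at most 2 objects without
        // ever indexing past the end of the sequence.
        int count = Math.Min(2, detections.Length);
        for (int i = 0; i < count; i++)
        {
            Console.WriteLine(detections[i]); // prints 42
        }
    }
}
```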

      By the way, before I began working on this new startup, I had started on a C# app that you can use to easily train a Haar classifier. I was going to use it for training a classifier to track hand gestures. You could easily use it for training a pupil classifier. As soon as I get time to finish it up, I’ll send you an email with a link to download it for use with your project. I’ll also add a link to it somewhere on this site for others to use as well.

  11. AmarjeetAlien says:

    I know about the tornadoes and the problems they have caused you and your work. Meanwhile, today is the final presentation of my project, but I’ll keep working on this even after my college life.
    And since you have started making the Haar classifier trainer, I think you should take a look at this guy’s project: http://info.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html
    He has done a nice job at the same task, but using Matlab. It’s a real-time classifier that trains and learns over time and becomes more robust. You don’t need an explicit training session for it.

    I really didn’t like HaarTraining because there is no scope for improvement once the training is done. My classifier is not that robust, and it’s creating a lot of trouble for me, and I can’t go for another 6-day session of haartraining.

    Will let you know how my “final presentation” went!
    Thanks LuCuS!

    • LuCuS says:

      The way my classifier works is you use your webcam and OpenCV to record videos for your positive and negative images. For example, in my hand gesture classifier, I recorded a video of me making different hand gestures in front of the camera (thumbs up, thumbs down, etc….). I only recorded about 60 seconds of video since 60 seconds at 30 fps = 1800 frames. The program then extracted each individual frame as an image on my file system. After that, the program trains itself using those exported images to build the classifier. It does take a while (several days) to train an entire classifier on my laptop. But, I have my own private cloud that I’ll be using to speed up that process. If everything works as planned, I’ve even considered creating a site that allows anyone to upload videos and / or images to my cloud to do the training for them in a shorter amount of time. I want my training program to be a simple 1-2-3 step process that is both user friendly and fast.
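      The extraction step described above could be sketched with the same old OpenCvSharp API used elsewhere in this thread (the video file name and output pattern are illustrative assumptions, not from the actual trainer):

```csharp
using OpenCvSharp;

class FrameDumper
{
    static void Main()
    {
        // Hypothetical 60-second webcam recording: at 30 fps this yields
        // roughly 1800 frames to use as training images.
        CvCapture cap = CvCapture.FromFile("gestures.avi");
        int n = 0;
        IplImage frame;
        while ((frame = cap.QueryFrame()) != null)
        {
            // Dump each frame as a numbered image for the Haar trainer.
            frame.SaveImage(string.Format("frame_{0:D5}.jpg", n++));
        }
    }
}
```

      The dumped images would then be sorted into positive and negative sets before training begins.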

  12. AmarjeetAlien says:

    …..^^I generally miss one- and two-letter words in my writing!! Maybe some kind of writing disorder!! lol :P

    Anyway, it’s over as far as the “college project” is concerned! The examiner was happy, but not that much, because I didn’t implement the clicking part… only the cursor was moving in sync with pupil motion… and it didn’t cover the entire screen, but he didn’t notice that. It was a 200-mark project; I have already got 94/100 in the 1st part and hopefully will get >90 in the 2nd part as well… all because of your support.

    Thanks a lot! Keep your good work going.

    • LuCuS says:

      AWESOME! Glad to hear it. I know you’ll keep improving your system. I can’t wait until you have it ready as a commercial application. When you do, I’d like to be your first buyer.

      • AmarjeetAlien says:

        Sure!
        But I’ll make sure it is available for free to the disabled, and since it has huge potential in gaming, normal gamers will have to pay a little!

        And you don’t have to buy…I’ll ePost you!

        Thanks!

        • LuCuS says:

          I could definitely see that as being huge in the gaming industry! I’ve done some messing around with the Kinect & C# and I believe that motion tracking is definitely the future of gaming.

        • LuCuS says:

          Something else you could consider doing with your app as it stands today is to offer up a cheap solution for webpage eye tracking analytics. I created a system like this for a marketing firm a few years back. It works by having several users sit down at a workstation and navigate thru a website. The system records all of the “hot spots” on the screen (what areas were viewed the most, how participants used the navigation, etc…). These hot spots are then displayed using a heat map to indicate where users look the most at each of the pages on the website. The marketing firm then uses this information to tweak the websites until all of the “important” stuff is viewed the most. For the majority of the website owners in the tests, they wanted participants to look and click more on ads than anything else. Knowing exactly where users look at your web page is worth a lot of money. You could easily modify your system to do this same thing and offer it up as an inexpensive alternative to some of those high-cost analytics labs that are out there today.
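          The “hot spot” accumulation at the core of such a system can be sketched as a coarse grid of counters (the grid size and the sample gaze points below are made up for illustration):

```csharp
using System;

class HeatMapDemo
{
    static void Main()
    {
        const int GridW = 4, GridH = 3;      // coarse screen grid
        const int ScreenW = 1280, ScreenH = 960;
        int[,] hits = new int[GridH, GridW]; // per-cell view counts

        // Hypothetical recorded gaze samples (x, y) in screen coordinates.
        int[,] gaze = { { 100, 100 }, { 150, 120 }, { 1200, 900 } };
        for (int i = 0; i < gaze.GetLength(0); i++)
        {
            int gx = gaze[i, 0] * GridW / ScreenW; // column 0..GridW-1
            int gy = gaze[i, 1] * GridH / ScreenH; // row 0..GridH-1
            hits[gy, gx]++;
        }

        // Two of the three samples fall in the top-left cell.
        Console.WriteLine(hits[0, 0]); // prints 2
    }
}
```

          Rendering those counts as colors over a page screenshot gives the heat map the marketing firm reviews.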

  13. nandita429@gmail.com says:

    Can I do this with Emgu CV instead of OpenCV? After tracking the eye, can the movement of the eye be used as a mouse pointer? Reply soon please.

  14. janani says:

    Hi,

    How do you stop detection when it has detected a pair of eyes? When I modify the code to ‘for (int i = 0; i < 2; i++)’ and run it, the line CvRect r = eyes[i].Value.Rect; throws an InvalidOperationException.

  15. dommy says:

    Hi,

    Good article – thanks. For motion changes, how can I eliminate or minimize the effect of changing light conditions? For instance, a room that is recorded at night and then again when the sun rises and light comes through the window and illuminates the room.

    Best Regards..
