Ever since receiving my new Google Glass a few weeks ago, my inbox has been flooded with emails from people asking whether I have built any computer vision apps for Glass yet. The answer is “yes,” and I plan on posting some articles and videos about them very soon. Until then, I thought it would be a good idea to post some articles showing how to get started with computer vision development on Google Glass using OpenCV for Android. Since building useful computer vision apps for Glass involves several steps, I will be breaking them up into multiple articles. In this article, I will walk you through the steps required to get your first OpenCV app installed and running on your Glass. While I’m at it, I will also show you how to install and run a simple face detection app. In a future article, I will cover more advanced topics such as applying filters and even a little bit of augmented reality.
I’ve been doing a lot of work with the Raspberry Pi lately in what little free time I’ve had, and one of the things I’ve been working with the most is computer vision. Since computer vision is my passion and the Raspberry Pi has so much potential, I wanted to push it to its limits by seeing whether I could run some of the same computer vision apps on my Raspberry Pi as I do on my full-size laptop. Most of the computer vision applications I’ve worked on recently are written in C++ and consist of proprietary code. However, I still love using other computer vision libraries, and OpenCV is still at the top of that list. Even though I sometimes write OpenCV applications in C++ or Java, which I can also do on the Raspberry Pi, I really like the fact that the Raspberry Pi is configured with Python right out of the box. So, I want to take a few minutes to show all of you how to configure your RPi to work with OpenCV using Python. Let’s begin.
Recently, I came across a pretty cool little device called the “Cronus controller adapter.” Basically, it’s a USB dongle that plugs into your Xbox 360 or PlayStation 3, allowing you to use any controller you want, no matter which console it was originally designed for, as long as it works over USB. For example, using the Cronus adapter, you can use your Xbox 360 controller on your PlayStation 3, your PlayStation 3 controller on your Xbox 360, or your mouse and keyboard on both. Since I (used to) do a lot of computer vision programming (and have been looking for an excuse to get back into it), I felt like this little device would be a great way for me to create some new computer vision applications. I’ve already written several computer vision apps that can detect and track objects, so I would like to test my skills at automating some video games by using OpenCV for the processing. Because the Cronus controller adapter lets you feed commands to your Xbox 360 and PlayStation 3 from basically any other device, I think it will be a great way for me to send commands to my Xbox 360 based on objects detected and tracked by OpenCV on my computer.
It’s been a while since I’ve worked on any computer vision applications. So, tonight I decided to spend a few minutes playing around and having some fun with OpenCV and C#. I dug up an old augmented reality app I created a while back and threw in some 3D models I found on the web. The code isn’t ready to be shared, but the test results are pretty cool so far, and I thought I would share them with all of you. As soon as I get the code to a stable point, I will post it here for all of you to play with. Until then, check out my other OpenCV articles or head over to my official Computer Vision website at http://www.learncomputervision.com.
It’s easy to recognize when the end of a semester is drawing near. Every computer science college student on the planet begins scouring the internet, looking for projects they can call their own and submit as their senior projects. Personally, I wish everyone would develop their own, new projects, as that’s how we get many of the amazing products we all come to love and rely on. But I also know that many computer scientists need a platform to build on top of. Besides, as Isaac Newton famously put it, we are all “standing on the shoulders of giants.” Since my website’s purpose is to educate and to give others the building blocks for developing their own products, you can imagine how hammered my web servers get as semester-end draws closer and closer.
Over the last few weeks, I have received hundreds of emails asking for source code to many of my computer vision projects. The most commonly requested project this semester is my lane detection application. A while back I had a hard drive crash and, unfortunately, did not have a backup, causing me to lose the source code for my original lane detection application. I also haven’t had the time (or a reason) to rewrite the application. However, with the boom of excitement about products such as Google Glass and the Vuzix Smart Glasses, I have decided to rewrite my lane detection app, which I would like to use with Google Glass, the Vuzix Smart Glasses, or the modified Vuzix Wrap 1200 video glasses that I have mounted a camera onto. The code is nowhere close to complete, but I do think it is in a good enough place that I can share it. Plus, as already mentioned, the code at this point is only a stepping stone for others to build on top of. I might decide to release the final source code once I have it completed, but I haven’t really thought that far ahead yet. Until then, here is the code as it is today.