Recently, I came across a pretty cool little device called the “Cronus controller adapter”. Basically, it’s a USB dongle that plugs into your Xbox 360 or PlayStation 3 and lets you use any controller you want, no matter what console it was originally designed for, as long as it works over USB. For example, using the Cronus adapter, you can use your Xbox 360 controller on your PlayStation 3, your PlayStation 3 controller on your Xbox 360, or your mouse and keyboard on both. Since I used to do a lot of computer vision programming (and have been looking for an excuse to get back into it), this little device seems like a great way for me to create some new computer vision applications. I’ve already written several computer vision apps that can detect and track objects, so I would like to test my skills at automating some video games, using OpenCV for the processing. Because the Cronus adapter lets you feed commands to your Xbox 360 and PlayStation 3 from basically any other device, it should be a great way for me to send commands to my Xbox 360 based on objects that OpenCV detects and tracks on my computer.
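To give you a feel for the idea — detect an object in a camera frame, then turn its position into a controller command — here is a minimal sketch in Python with NumPy. This is just an illustration of the detect-and-map loop, not the actual app: the real thing would use OpenCV’s tracking APIs and the Cronus adapter’s input protocol, and the `frame_to_command` mapping and color thresholds below are hypothetical.

```python
import numpy as np

def find_target(frame, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB values fall
    inside [lower, upper], or None if nothing matches."""
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return ys.mean(), xs.mean()

def frame_to_command(frame, lower, upper):
    """Hypothetical mapping: steer toward the tracked object by comparing
    its centroid with the horizontal center of the frame."""
    hit = find_target(frame, lower, upper)
    if hit is None:
        return "IDLE"
    _, cx = hit
    center = frame.shape[1] / 2
    if cx < center - 10:
        return "LEFT"
    if cx > center + 10:
        return "RIGHT"
    return "FORWARD"

# Synthetic 100x100 frame with a red blob on the right-hand side.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:60, 70:90] = (255, 0, 0)
print(frame_to_command(frame, np.array([200, 0, 0]), np.array([255, 50, 50])))  # RIGHT
```

In the real application, the threshold-and-centroid step would be replaced by a proper OpenCV detector, and the returned command would be translated into whatever button or stick event the Cronus adapter expects.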
It’s been a while since I’ve worked on any computer vision applications. So, tonight I decided to spend a few minutes playing around and having some fun with OpenCV and C#. I dug up an old augmented reality app I created a while back and threw in some 3D models I found on the web. The code isn’t ready to be shared, but the test results are pretty cool so far, so I thought I would share them with all of you. As soon as I get the code to a stable point, I will post it here for all of you to play with. Until then, check out my other OpenCV articles or head over to my official Computer Vision website at http://www.learncomputervision.com.
It’s easy to recognize when a semester is drawing to a close. Every computer science college student on the planet begins scouring the internet, looking for projects they can call their own and submit as their senior projects. Personally, I wish everyone would develop their own new projects, as that’s how we get many of the amazing products we all come to love and rely on. But I also know that many computer scientists need a platform to build on top of. Besides, as Isaac Newton famously put it, we’re all “standing on the shoulders of giants.” Since my website’s purpose is to educate and to give others the building blocks for developing their own products, you can imagine how hammered my web servers get as the end of the semester approaches.
Over the last few weeks, I have received hundreds of emails asking for the source code to many of my computer vision projects. The most commonly requested project this semester is my lane detection application. A while back, I had a hard drive crash and, unfortunately, did not have a backup, so I lost the source code for my original lane detection application. I also haven’t had the time (or a reason) to rewrite it. However, with the boom of excitement around products such as Google Glass and the Vuzix Smart Glasses, I have decided to rewrite my lane detection app, which I would like to use with Google Glass, the Vuzix Smart Glasses, or the modified Vuzix Wrap 1200 video glasses that I have mounted a camera onto. The code is nowhere near complete, but I do think it is in a good enough place to share. Plus, as already mentioned, the code at this point is only a stepping stone for others to build on top of. I might release the final source code once I have it completed, but I haven’t really thought that far ahead yet. Until then, here is the code as it is today.
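For readers who just want the core idea before digging into the full project: the heart of most lane detection pipelines is finding the dominant straight lines in an edge map, usually with a Hough transform. Below is a minimal NumPy sketch of that voting step — to be clear, this is not my original application, just an illustration under simplified assumptions. A real pipeline would first run something like Canny edge detection on camera frames; here the edge points are synthetic.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Vote each edge point (y, x) into a (rho, theta) accumulator and
    return the strongest (rho, theta) pair, i.e. the dominant straight
    line in the edge map -- for lane detection, a lane boundary."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    for y, x in edge_points:
        # Line normal form: rho = x*cos(theta) + y*sin(theta)
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[theta_idx]

# Synthetic edge map: the 45-degree line y = x across a 100x100 frame.
points = [(i, i) for i in range(100)]
rho, theta = hough_lines(points, (100, 100))
print(rho, theta)  # the y = x line has theta ~ 3*pi/4 and rho ~ 0 in normal form
```

OpenCV provides this exact operation as `cv2.HoughLines`, which is what the real app would call; the point of the sketch is just to show what the accumulator is doing under the hood.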
Every day for the past week, I’ve taken you on a journey to teach you everything you need to know about creating your own Android apps. The first day was spent teaching you how to download and install the Java Development Kit (JDK), Eclipse, and the Android SDK. The second day was spent teaching you how to configure Eclipse to work with the Android SDK and how to create your first Android app. Day three was spent teaching you how to create a layout and how to add code to make your app functional. The fourth day was spent teaching you how to test and debug your app with the Android emulator. Day five was spent teaching you how to monetize your app by placing AdMob ads in it. On day six, I taught you how to run your app on actual Android devices, including cellphones and tablets. Today, I am going to teach you how to publish your new Android app to the Google Play store so that others can enjoy your hard work and so that you can make some money from it. So, let’s jump right in.
In today’s article, Part 6 of the multi-part series, I will teach you how to run your Android apps on real devices. In Part 4, I taught you how to test your application in the Android emulator. But seeing your app running in the emulator is nowhere near as rewarding as seeing it running on an actual cellphone or tablet. Besides, before sharing your app for the rest of the world to enjoy, it’s always best to test it on a device yourself. To that end, I will be teaching you how to debug your app on your device over a USB cable, and I will also be teaching you how to sign your application and deploy it to an actual device so that you can use it just like any other app. So, let’s begin.