Color sensor


  • #16
    If you are not experienced in capturing frames and analyzing a Mat (the structure that holds a frame), you may be better off using a color sensor. If you want to try something new with the camera (I love trying new things), you can take a look at our sample code playing with OpenCV.
    https://github.com/trc492/FtcSamples...estOpenCv.java
    This sample does face detection, not blob detection, but it will give you a little taste of how to capture a frame, enumerate the pixels of a Mat around the detected face, and add a mustache to your face. For beacon color detection, you don't even need OpenCV: if you are right in front of the beacon, you know the approximate areas of the left and right sides on the Mat. You just need to enumerate the pixels of those parts of the image and average the color of each area. That's all.
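The averaging idea above can be sketched in plain Java, treating the frame as a packed-ARGB pixel array (as you would get from Android's Bitmap.getPixels). This is not the poster's sample code; the class, method names, and region coordinates are illustrative assumptions.

```java
// Sketch: average the red and blue channels over a rectangular region
// of a frame, then compare left and right halves of the beacon area.
public class BeaconColorAverager {
    /** Returns {avgRed, avgBlue} over pixels in [x0,x1) x [y0,y1). */
    static double[] averageRedBlue(int[] pixels, int width,
                                   int x0, int x1, int y0, int y1) {
        long redSum = 0, blueSum = 0, count = 0;
        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                int argb = pixels[y * width + x];
                redSum  += (argb >> 16) & 0xFF;  // red channel
                blueSum += argb & 0xFF;          // blue channel
                count++;
            }
        }
        return new double[] { (double) redSum / count, (double) blueSum / count };
    }

    public static void main(String[] args) {
        // Tiny 4x2 "frame": left half pure red, right half pure blue.
        int[] frame = {
            0xFFFF0000, 0xFFFF0000, 0xFF0000FF, 0xFF0000FF,
            0xFFFF0000, 0xFFFF0000, 0xFF0000FF, 0xFF0000FF,
        };
        double[] left  = averageRedBlue(frame, 4, 0, 2, 0, 2);
        double[] right = averageRedBlue(frame, 4, 2, 4, 0, 2);
        // Left side reads red, right side reads blue.
        System.out.println((left[0] > left[1]) + " " + (right[1] > right[0]));
    }
}
```

On a real frame you would pick the two regions from the known beacon geometry and compare which half has the higher red (or blue) average.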



    • #17
      Thanks! We will have a team meeting to see where it goes. I'm thinking a color sensor might be the best bet at this point.



      • #18
        What do you guys think is better this year, using Vuforia/OpenCV or a light sensor to follow the line? (This is specifically for getting to the beacon, not finding the color of the beacon)



        • #19
          Originally posted by FTC8686 View Post
          What do you guys think is better this year, using Vuforia/OpenCV or a light sensor to follow the line? (This is specifically for getting to the beacon, not finding the color of the beacon)
          Ideally, a form of sensor fusion would occur. However, we are finding that computer vision works well as long as the robot moves slowly and smoothly enough for image detection to succeed reliably. (Our OpenCV implementation applies optimizations once it gets a "lock" on the image, and keeps that lock as long as the target doesn't move too quickly.)
          FTC6460 mentor (software+computer vision+electronics), FPGA enthusiast. In favor of allowing custom electronics on FTC bots.
          Co-founder of ##ftc live chat for FTC programming--currently you may need to join and wait some time for help--volunteer basis only.



          • #20
            Originally posted by FTC8686 View Post
            What do you guys think is better this year, using Vuforia/OpenCV or a light sensor to follow the line? (This is specifically for getting to the beacon, not finding the color of the beacon)
            I wrote a demo program for a three-wheel omni bot (that I built) that can approach an image target, position itself centered directly in front of it, and stop 15 cm from the target.
            I modified the SDK Vuforia example and added a dozen lines of code to calculate the desired drive from the robot and target positions.

            I am by no means a vision processing expert, in fact I have no experience with vision processing.

            The bot can approach the target from about 6 feet away, and does not have to start out looking directly at the target. As long as it's in the viewfinder, it can be tracked.
            I'm still tweaking the robot path motion, but I will publish the code once it's ready for prime time.

            It looks pretty cool.
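The "dozen lines" the poster describes are not published yet, so here is a hedged guess at what such a calculation might look like: a simple proportional controller in plain Java that turns a target's lateral offset and distance (as a Vuforia listener might report, in cm) into left/right drive powers, stopping 15 cm out. The gains and names are invented for illustration.

```java
// Sketch: proportional approach toward a vision target.
public class ApproachTarget {
    static final double STOP_DISTANCE_CM = 15.0;
    static final double FORWARD_GAIN = 0.01;  // power per cm of range error
    static final double TURN_GAIN    = 0.02;  // power per cm of lateral error

    /** Returns {leftPower, rightPower}, each clipped to [-1, 1]. */
    static double[] drivePowers(double lateralOffsetCm, double distanceCm) {
        double forward = FORWARD_GAIN * (distanceCm - STOP_DISTANCE_CM);
        double turn    = TURN_GAIN * lateralOffsetCm;
        return new double[] { clip(forward + turn), clip(forward - turn) };
    }

    static double clip(double p) { return Math.max(-1.0, Math.min(1.0, p)); }

    public static void main(String[] args) {
        // Target 10 cm to the right, 1 m away: drive forward, turning right.
        double[] p = drivePowers(10.0, 100.0);
        System.out.printf("left=%.2f right=%.2f%n", p[0], p[1]);
        // At the stop distance with no offset, both powers go to zero.
        double[] stopped = drivePowers(0.0, STOP_DISTANCE_CM);
        System.out.println(stopped[0] == 0.0 && stopped[1] == 0.0); // prints true
    }
}
```

As long as Vuforia keeps reporting a pose each loop, feeding the fresh offsets back into this calculation steers the robot onto the target and smoothly slows it to a stop.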



            • #21
              OK, cool! Where do you guys position the phone on your robots? Isn't there a rule that the screen has to be clearly visible, or are you using the front camera?



              • #22
                We mounted it on the side in previous years, since our CV tracking algorithm yielded left/right movement data, and it's much easier to just move the robot forward/backward (shifting the image side to side) than to back up and reposition. For various reasons, such as a nasty churro-covered mountain, we did not use a holonomic drive.



                • #23
                  Originally posted by FTC8686 View Post
                  What do you guys think is better this year, using Vuforia/OpenCV or a light sensor to follow the line? (This is specifically for getting to the beacon, not finding the color of the beacon)
                  How do you get OpenCV to work with Vuforia? I haven't been able to integrate OpenCV into the recently released code. If you have the latest code working with OpenCV, can you provide examples?
                  Thanks in advance!



                  • #24
                    Originally posted by Corban987 View Post
                    How do you get OpenCV to work with Vuforia? I haven't been able to integrate OpenCV into the recently released code. If you have the latest code working with OpenCV, can you provide examples?
                    Thanks in advance!
                    OpenCV and Vuforia are different approaches to the same general problem. Vuforia does one thing and does it well (image/object detection and tracking, including the whole pipeline of grabbing a frame, processing it, and providing a result), while OpenCV is a library of tools that you apply to images you retrieve from the Android camera yourself, with the flexibility to do things that Vuforia on its own does not.

