Vuforia Target Detection

  • Vuforia Target Detection

    Is there any way to improve detection of the Vuforia pictures? We have written a lot of code that works really well - IF it can see the target. Unfortunately, it looks like the target can only be seen reliably within about 4 feet, and there is no scoring to be done that close to the target. Is it worth the investment to get a Samsung phone or webcam, or is the detection distance similar?

    We are using the Moto G4 camera. The targets were printed on an inkjet printer, put into clear page protectors, and attached to the outside of the perimeter wall.

  • #2
    I had a lot of hope this year for using Vuforia for our autonomous path, but the team realized it wasn't worth the effort and ended up using sensors. I think the effort would be better spent retraining TensorFlow to detect objects other than the minerals, but unfortunately FIRST has kept everything about it secret. I wish they would open up the model and provide some guidance on retraining.

    Comment


    • #3
      I mean guidance on training new object detection models.

      Comment


      • #4
        FTC12676 The code and info on the TensorFlow training for FTC is available here: https://github.com/google/ftc-object-detection

        Comment


        • #5
          And info on training is available here: https://github.com/google/ftc-object...aster/training
          That does describe in depth how one might attempt to retrain TensorFlow to recognize other FTC objects, robots, field elements, etc.

          Comment


          • #6
            Originally posted by FLARE View Post
            Is it worth the investment to get a Samsung phone or webcam, or is the detection distance similar?
            No, it's not worth it. Vuforia only uses up to 1080p anyway, so a 4K camera isn't going to do you any good.

            Comment


            • #7
              Cheer4FTC - thanks. I will have the team look at this and try retraining.

              Comment


              • #8
                Are you making any progress with the TensorFlow training at https://github.com/google/ftc-object...aster/training? We've made a model, but have not yet trained it. I've gotten down to the last unnamed step:

                "call the following, from the tensorflow directory:
                bazel run ... (buncha stuff)"

                Installing Bazel looks like a major campaign. Have you gotten this far? Have you completed training your new model?

                Lead Coach, FTC 5197 "the GearHeads"

                Comment


                • #9
                  We have installed Bazel, and succeeded in training a new model from the video supplied with ftc_app 4.3. Preliminary testing shows issues others have noted:
                  1. Picks up the colored tape under the Minerals as separate objects.
                  2. Picks up Minerals seen over the crater rim as separate objects.

                  Advantages seen over OpenCV:
                  1. Not affected by a wide range of lighting intensity.
                  2. Not affected by big yellow backgrounds.

                  Beware. The tutorial I mentioned in my 2/23 post contains two show-stopper typos:
                  1. ouptut_arrays --> output_arrays
                  2. normalized_input_image_tenor --> normalized_input_image_tensor

                  Beware. Tutorial https://github.com/google/ftc-object...ObjectDetector has a show-stopper typo: asssemble --> assemble.

                  Things we will research further:
                  1. Color temperature and balance of lighting.
                  2. Better discrimination of false positive detections by size, aspect ratio, and confidence (see the sketch after this list).
                  3. Better starting video, shot from more realistic positions typical of a robot-carried phone camera.
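
                  For item 2, the kind of filter we have in mind looks roughly like this, built on the Recognition objects the SDK's TFObjectDetector returns (a sketch only; all thresholds are untuned placeholders, and tfod stands in for the detector instance):

                      import java.util.ArrayList;
                      import java.util.List;
                      import org.firstinspires.ftc.robotcore.external.tfod.Recognition;

                      // Reject likely false positives (tape strips, minerals seen over the
                      // crater rim) by confidence, pixel size, and aspect ratio.
                      List<Recognition> filtered = new ArrayList<>();
                      List<Recognition> updated = tfod.getUpdatedRecognitions();
                      if (updated != null) {
                          for (Recognition r : updated) {
                              double aspect = r.getWidth() / r.getHeight();
                              if (r.getConfidence() > 0.6
                                      && r.getWidth() > 40 && r.getHeight() > 40  // rejects small over-the-rim detections
                                      && aspect > 0.5 && aspect < 2.0) {          // rejects long, thin tape strips
                                  filtered.add(r);
                              }
                          }
                      }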

                  Lead Coach, FTC 5197 "the GearHeads"

                  Comment


                  • #10
                    I wrote my own OpenCV pipeline after being disappointed with TensorFlow's reliability. It has not failed even once so far, out of 3 competitions (of 8 to 9 matches each), as well as countless practice runs. I specifically crafted it to be able to ignore the mass blob of yellow in the crater as well as the emcee walking around in bright yellow socks.
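
                    The core of such a pipeline looks roughly like this (a simplified sketch, not the pipeline itself; the HSV bounds, crater cutoff row, and contour thresholds are all illustrative):

                        import java.util.ArrayList;
                        import java.util.List;
                        import org.opencv.core.*;
                        import org.opencv.imgproc.Imgproc;

                        // Threshold yellow in HSV, mask off the crater region, then keep
                        // the largest contour plausibly shaped like a single mineral.
                        Mat hsv = new Mat();
                        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_RGB2HSV);
                        Mat mask = new Mat();
                        Core.inRange(hsv, new Scalar(15, 100, 100), new Scalar(35, 255, 255), mask);  // "mineral yellow"

                        // Zero out the upper part of the image, where the crater (and any
                        // bright yellow socks in the background) appear.
                        mask.submat(0, (int) (mask.rows() * 0.4), 0, mask.cols()).setTo(new Scalar(0));

                        List<MatOfPoint> contours = new ArrayList<>();
                        Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
                        Rect best = null;
                        for (MatOfPoint c : contours) {
                            Rect box = Imgproc.boundingRect(c);
                            double aspect = (double) box.width / box.height;
                            if (box.area() > 400 && aspect > 0.5 && aspect < 2.0
                                    && (best == null || box.area() > best.area())) {
                                best = box;  // largest surviving yellow blob = gold mineral candidate
                            }
                        }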

                    Comment


                    • #11
                      Our season is now over, but we got TensorFlow consistently working 100% in our last 2 tournaments. Rock solid.

                      I will share what we did. We used it to detect only the gold mineral. The problem with the models only affected the silver. There is absolutely no reason to care about silver at all. We affixed our phone solidly high up, and viewed the minerals near the top of the screen, so the top of the screen was midway up the crater wall.

                      We then returned the x position of the found gold mineral, and determined its position in thirds relative to the phone camera resolution. This made the code much slimmer too.
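
                      In outline, the logic was roughly this (a sketch against the SDK's Recognition interface; Position and goldPosition are placeholder names, and "Gold Mineral" is the label string from the sample TFOD code):

                          // Find the gold mineral and bin its horizontal center into
                          // left/center/right thirds of the camera image.
                          List<Recognition> recognitions = tfod.getUpdatedRecognitions();
                          if (recognitions != null) {
                              for (Recognition r : recognitions) {
                                  if (!"Gold Mineral".equals(r.getLabel())) continue;  // silver is ignored entirely
                                  float centerX = (r.getLeft() + r.getRight()) / 2;
                                  int third = r.getImageWidth() / 3;
                                  if (centerX < third)          goldPosition = Position.LEFT;
                                  else if (centerX < 2 * third) goldPosition = Position.CENTER;
                                  else                          goldPosition = Position.RIGHT;
                              }
                          }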

                      The only error we had in the last 3 tournaments was one time, 3 tournaments ago, when the students failed to notice that the phone screen went dark after they mounted the robot on the lander before initialization. As others have noted, Vuforia needs to be in the foreground with an active, awake screen or else TensorFlow crashes.

                      Comment


                      • #12
                        We use the rear camera on our Moto G4 Play, mounted high on the robot and tilted downward so only the mineral field is viewed. Nothing in the crater can be seen at all.

                        TensorFlow recognizes the gold and silver quickly (in less than 1 second) 100% of the time with zero false positives. You can't do much better object recognition than that. We only modified the sample TFOD code to consider TWO minerals rather than three because of field-of-view limitations. We deduce that the gold mineral is in the third position if it isn't found in the two positions we can view.
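
                        That deduction looks roughly like this (a sketch built on the sample's Recognition handling; Position and goldPosition are placeholder names, and it assumes the camera sees the LEFT and CENTER sampling positions):

                            // Wait until both visible minerals are detected, then decide.
                            List<Recognition> recognitions = tfod.getUpdatedRecognitions();
                            if (recognitions != null && recognitions.size() == 2) {
                                int goldX = -1, silverX = -1;
                                for (Recognition r : recognitions) {
                                    int x = (int) ((r.getLeft() + r.getRight()) / 2);
                                    if ("Gold Mineral".equals(r.getLabel())) goldX = x;
                                    else silverX = x;
                                }
                                if (goldX == -1)          goldPosition = Position.RIGHT;  // two silvers: gold is in the hidden spot
                                else if (goldX < silverX) goldPosition = Position.LEFT;
                                else                      goldPosition = Position.CENTER;
                            }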

                        It has NEVER made an incorrect object recognition in hundreds of trials since December.

                        I think this works so well due to the orientation of the camera with respect to the mineral field and the crater. We don't look "out", only "down", if that makes sense.

                        We switched to TensorFlow object detection for mineral sampling from our first approach with a REV color sensor after our first meet in December. We easily won 4 of 5 matches, and sampling worked, but it was not confidence inspiring. TFOD takes all the pressure off of near-perfect dead-reckoning navigation.

                        We are looking forward to next week's Missouri State Championship with some of the nation's best Rover Ruckus robots... weather permitting (most of our events were cancelled).
                        Contact me if you would like to hear more about how we use TensorFlow Object Detection.

                        Russ Miller
                        Coach/Mentor of FTC team 9808

                        Comment


                        • #13
                          Our team is having trouble getting all 3 minerals in the picture from where we can mount the phone. We are thinking of either getting a wide-angle lens for the phone, or we just came up with the idea of having only 2 in the field of view and doing basically what Russ Miller described. Has anyone tried a wide-angle lens? Also, how did you modify the code to detect just 2 minerals? Did you create a new object detector? Any guidance you can share would be greatly appreciated.

                          Comment


                          • #14
                            We briefly tried a wide-angle lens, but it picked up too much from the crater, so we decided against it. Maybe if you spent some time on it you could adjust it so it works, but we didn't pursue that path.

                            As long as you know which 2 mineral positions you are looking at, you can figure out which position the gold is in. This could be a nice logic project for your newer programmer(s). Have them figure it out "in English" first - how could "I" look at only 2 of them and then know which of the 3 is gold? Then modify the code from ConceptTensorFlowObjectDetection to use that same logic (the sketch under post #12 above shows one way).

                            Comment


                            • #15
                              Originally posted by FLARE View Post
                              We briefly tried a wide-angle lens, but it picked up too much from the crater, so we decided against it. Maybe if you spent some time on it you could adjust it so it works, but we didn't pursue that path.
                              I use a wide-angle lens with a Logitech C920 and a fully custom OpenCV pipeline. I also had the issue of picking up lots of yellow in the crater, but I specifically wrote the pipeline to handle that. So far, out of 3 competitions, it has not failed to correctly identify the gold mineral.

                              Comment
