Grabbing Frames from Vuforia for analysis


  • Grabbing Frames from Vuforia for analysis

    Hi

    I was reading this bug report
    https://github.com/ftctechnh/ftc_app/issues/187
    and I am interested in knowing how to do this better. I have modified the code as described, but I am not sure exactly how to grab the frame. Over the break we managed to learn OpenCV to analyse frames and determine the RED/BLUE status of beacons, and would like to think what we learnt over the summer was not wasted.
    I am assuming I can take a frame from Vuforia and pass it into OpenCV and do the color analysis. If this can be done some other way, can somebody share? Any help, even a nudge in the right direction, would be appreciated.

    Thanks

  • #2
    Originally posted by Corban987
    Hi

    I was reading this bug report
    https://github.com/ftctechnh/ftc_app/issues/187
    and I am interested in knowing how to do this better. I have modified the code as described, but I am not sure exactly how to grab the frame. Over the break we managed to learn OpenCV to analyse frames and determine the RED/BLUE status of beacons, and would like to think what we learnt over the summer was not wasted.
    I am assuming I can take a frame from Vuforia and pass it into OpenCV and do the color analysis. If this can be done some other way, can somebody share? Any help, even a nudge in the right direction, would be appreciated.

    Thanks
    Check out our video on how to do it until there is an official update
    https://www.youtube.com/watch?v=P4q5LaN3DG4

    You can definitely take this Bitmap and convert it to a Mat. We have done this already.

    Hope this helps
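
    If it helps, the conversion itself is only a couple of lines. A minimal sketch, assuming the OpenCV Android bindings are already set up in your project (bm is the Bitmap built from the Vuforia Image, as in the video):

    Code:
    import org.opencv.android.Utils;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    // bm is the Bitmap copied out of the Vuforia frame (RGB_565)
    Mat mat = new Mat();
    Utils.bitmapToMat(bm, mat);                          // produces an RGBA Mat
    Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2RGB);  // drop the alpha channel
    // mat is now ready for normal OpenCV processing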



    • #3
      Originally posted by FTC3491
      Check out our video on how to do it until there is an official update
      https://www.youtube.com/watch?v=P4q5LaN3DG4

      You can definitely take this Bitmap and convert it to a Mat. We have done this already.

      Hope this helps
      Wow, you guys are smart! Thanks for the video. It took a while to work through, but I think I get it. I will now work on merging in OpenCV. Hope you don't mind if I have more questions going forward? We are running behind on our build now, so I haven't been able to pursue this further this week; hopefully I can catch up in a few days!



      • #4
        Originally posted by FTC3491
        Check out our video on how to do it until there is an official update
        https://www.youtube.com/watch?v=P4q5LaN3DG4

        Hope this helps
        Keep going with this. Your team is creating a valuable resource for the FTC community! All the videos are nicely done and easy to understand.



        • #5
          Thank you so much. These videos really helped us understand Vuforia. We were wondering how to implement the FTC Vision Library in the new FTC SDK for this year. We were able to do this last year, but we can't figure out how to do it this year. It would be great if we could take frames from Vuforia and send them to OpenCV to analyze.



          • #6
            Originally posted by FTC3491
            Check out our video on how to do it until there is an official update
            https://www.youtube.com/watch?v=P4q5LaN3DG4

            You can definitely take this Bitmap and convert it to a Mat. We have done this already.

            Hope this helps
            You guys are awesome - thanks for sharing!!!



            • #7
              Hi FTC3491, I am having a little trouble converting this to a normal OpMode rather than a LinearOpMode. I would prefer to use OpMode, as we have most of our code in a state engine and I don't really want to convert it now. Hopefully you can help; PM me if you want. Thanks.



              • #8
                Originally posted by Corban987
                Hi FTC3491, I am having a little trouble converting this to a normal OpMode rather than a LinearOpMode. I would prefer to use OpMode, as we have most of our code in a state engine and I don't really want to convert it now. Hopefully you can help; PM me if you want. Thanks.
                Purely as a side comment... you can still run all your state engine code in a LinearOpMode, simply by putting a while (opModeIsActive()) { } loop in the LinearOpMode.runOpMode() method and treating it like the OpMode.loop() method. Whatever you used to put in the loop() call goes inside the while loop, whatever you put in the init() method goes before the while loop, and whatever you put in the stop() call goes after the loop.

                Bada bing!
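
                Roughly like this; a minimal sketch, where the initStateEngine/updateStateEngine/shutdownStateEngine calls are just placeholders for whatever your own state-engine code does:

                Code:
                import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
                import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

                @Autonomous(name = "StateEngineAuto")
                public class StateEngineAuto extends LinearOpMode {
                    @Override public void runOpMode() throws InterruptedException {
                        initStateEngine();           // whatever used to live in init()

                        waitForStart();

                        while (opModeIsActive()) {   // stands in for loop()
                            updateStateEngine();     // whatever used to live in loop()
                            telemetry.update();
                        }

                        shutdownStateEngine();       // whatever used to live in stop()
                    }

                    // placeholders for your own state-engine code
                    private void initStateEngine() { }
                    private void updateStateEngine() { }
                    private void shutdownStateEngine() { }
                }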



                • #9
                  Projecting points and cropping images

                  Originally posted by FTC3491
                  Check out our video on how to do it until there is an official update
                  https://www.youtube.com/watch?v=P4q5LaN3DG4

                  You can definitely take this Bitmap and convert it to a Mat. We have done this already.

                  Hope this helps
                  Hi FIXIT3491! Thanks for your helpful video.

                  Your example seems to project points onto the camera image using the corners of the target image, which makes sense for demonstration purposes.

                  But wouldn't we want to use some kind of translation from the target image to the beacon that's located "above" it? I would assume you'd want to crop the image of the beacon for analysis, rather than the target image itself. Or did I miss the translation somewhere in there?
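
                  In other words, something like this sketch, reusing the Tool.projectPoint call from the video. The millimetre offsets here are illustrative guesses, not measured field values:

                  Code:
                  // Target frame: x right, y up, z out of the image, in mm.
                  // The target is 254mm x 184mm, so its top edge is at y = +92.
                  // Shift the projected corners upward so they bracket the beacon
                  // rather than the target itself (offsets below are guesses).
                  float beaconBottom = 92f + 100f;
                  float beaconTop    = 92f + 250f;
                  Vec2F beaconUpperLeft  = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F(-110, beaconTop, 0));
                  Vec2F beaconUpperRight = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F( 110, beaconTop, 0));
                  Vec2F beaconLowerLeft  = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F(-110, beaconBottom, 0));
                  Vec2F beaconLowerRight = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F( 110, beaconBottom, 0));
                  // Crop the camera frame to the bounding box of these four points,
                  // then run the color analysis on that sub-image.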



                  • #10
                    Thanks philbot - I feel stupid not thinking about that! Sometimes the obvious is just too obvious.
                    Maybe it was because we had a lot of issues with LinearOpMode last year that we were trying to stay away from it. I have most of it in a LinearOpMode now, but I started rewriting many aspects.

                    So, on to Vuforia and grabbing images: I am still trying this without much luck.
                    I did merge the beacon code in to get the position information from the Vuforia sample code, and I also updated with the latest from the YouTube video. If I comment out the image-grabbing code, everything runs fine.


                    Code:
                            waitForStart();
                    
                            velocityVortex.activate();
                            Image rgb = null;
                    
                            while (opModeIsActive()) {
                    
                    //            VuforiaLocalizer.CloseableFrame frame = vuforia.getFrameQueue().take(); //takes the frame at the head of the queue    <-----  If I uncomment this the code crashes at about 5 seconds????
                    
                    //            long numImages = frame.getNumImages();
                    //
                    //            for (int i = 0; i < numImages; i++)
                    //            {
                    //                if (frame.getImage(i).getFormat() == PIXEL_FORMAT.RGB565)
                    //                {
                    //                    rgb = frame.getImage(i);
                    //                    break;
                    //                }
                    //            }
                    
                                /*rgb is now the Image object that we’ve used in the video*/
                    
                                for (VuforiaTrackable beac : velocityVortex) {
                    
                                    OpenGLMatrix pose = ((VuforiaTrackableDefaultListener) beac.getListener()).getRawPose();
                    
                                    if (pose != null) {
                    
                                        Matrix34F rawPose = new Matrix34F();
                                        float[] poseData = Arrays.copyOfRange(pose.transposed().getData(), 0, 12);
                                        rawPose.setData(poseData);
                    
                                        Vec2F upperLeft = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F(-127,92,0));
                                        Vec2F upperRight = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F(127,92,0));
                                        Vec2F lowerLeft = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F(-127,-92,0));
                                        Vec2F lowerRight = Tool.projectPoint(vuforia.getCameraCalibration(), rawPose, new Vec3F(127,-92,0));
                    
                                    }
                    
                                }
                                for (VuforiaTrackable trackable : allTrackables) {
                                    /**
                                     * getUpdatedRobotLocation() will return null if no new information is available since
                                     * the last time that call was made, or if the trackable is not currently visible.
                                     * getRobotLocation() will return null if the trackable is not currently visible.
                                     */
                                    telemetry.addData(trackable.getName(), ((VuforiaTrackableDefaultListener)trackable.getListener()).isVisible() ? "Visible" : "Not Visible");    //
                    
                                    OpenGLMatrix robotLocationTransform = ((VuforiaTrackableDefaultListener)trackable.getListener()).getUpdatedRobotLocation();
                                    if (robotLocationTransform != null) {
                                        lastLocation = robotLocationTransform;
                    
                                    }
                                }
                                /**
                                 * Provide feedback as to where the robot was last located (if we know).
                                 */
                                if (lastLocation != null) {
                                    // Then you can extract the positions and angles using the getTranslation and getOrientation methods.
                                    VectorF trans = lastLocation.getTranslation();
                                    Orientation rot = Orientation.getOrientation(lastLocation, AxesReference.EXTRINSIC, AxesOrder.XYZ, AngleUnit.RADIANS);
                                    // Robot position is defined by the standard Matrix translation (x and y)
                                    robotX = trans.get(0);
                                    robotY = trans.get(1);
                    
                                    // Robot bearing (in Cartesian system) position is defined by the standard Matrix z rotation
                                    robotBearing = rot.thirdAngle;
                    
                                    telemetry.addData("Pos X ", robotX);
                                    telemetry.addData("Pos Y ", robotY);
                                    telemetry.addData("Bear  ", robotBearing);
                                    //  RobotLog.vv(TAG, "robot=%s", format(lastLocation));
                                    telemetry.addData("Pos   ", format(lastLocation));
                                } else {
                                    telemetry.addData("Pos   ", "Unknown");
                                }
                                telemetry.update();
                            }
                        }



                    • #11
                      Shouldn't the image-grabbing code go in the VuforiaLocalizerImplSubclass that extends the impl class? I haven't had a chance to test this yet. And in order to analyze the frame, would we need some object-detection code to isolate both the beacon and the tracking images, and then pick out the color for the left/right side?
                      Code:
                      import com.qualcomm.robotcore.util.RobotLog;
                      import com.vuforia.Frame;
                      import com.vuforia.Image;
                      import com.vuforia.PIXEL_FORMAT;
                      import com.vuforia.State;
                      import com.vuforia.Vuforia;
                      import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
                      import org.firstinspires.ftc.robotcore.internal.VuforiaLocalizerImpl;
                      
                      
                      // A hack that works around lack of access in v2.2 to camera image data when Vuforia is running
                      // Note: this may or may not be supported in future releases.
                      public class VuforiaLocalizerImplSubclass extends VuforiaLocalizerImpl {
                      
                          /** {@link CloseableFrame} exposes a close() method so that we can proactively
                           * reduce memory pressure when we're done with a Frame */
                      
                          public Image rgb;
                      
                          public class CloseableFrame extends Frame {
                              public CloseableFrame(Frame other) { // clone the frame so we can be useful beyond callback
                                  super(other);
                              }
                              public void close() {
                                  super.delete();
                              }
                          }
                      
                          public class VuforiaCallbackSubclass extends VuforiaLocalizerImpl.VuforiaCallback {
                      
                              @Override public synchronized void Vuforia_onUpdate(State state) {
                                  super.Vuforia_onUpdate(state);
                                  // We wish to accomplish two things: (a) get a clone of the Frame so we can use
                                  // it beyond the callback, and (b) get a variant that will allow us to proactively
                                  // reduce memory pressure rather than relying on the garbage collector (which here
                                  // has been observed to interact poorly with the image data which is allocated on a
                                   // non-garbage-collected heap). Note that both of these concerns are independent of
                                  // how the Frame is obtained in the first place.
                                  CloseableFrame frame = new CloseableFrame(state.getFrame());
                                  RobotLog.vv(TAG, "received Vuforia frame#=%d", frame.getIndex());
                      
                                  long numImages = frame.getNumImages();
                      
                                  for (int i = 0; i < numImages; i++) {
                                      if (frame.getImage(i).getFormat() == PIXEL_FORMAT.RGB565) {
                                          rgb = frame.getImage(i);
                                          break;
                                      }
                                  }
                      
                                  frame.close();
                              }
                          }
                      
                          public VuforiaLocalizerImplSubclass(VuforiaLocalizer.Parameters parameters) {
                              super(parameters);
                              stopAR();
                              clearGlSurface();
                      
                              this.vuforiaCallback = new VuforiaCallbackSubclass();
                              startAR();
                      
                              // Optional: set the pixel format(s) that you want to have in the callback
                              Vuforia.setFrameFormat(PIXEL_FORMAT.RGB565, true);
                          }
                      
                          public void clearGlSurface() {
                              if (this.glSurfaceParent != null) {
                                  appUtil.synchronousRunOnUiThread(new Runnable() {
                                      @Override public void run() {
                                          glSurfaceParent.removeAllViews();
                                          glSurfaceParent.getOverlay().clear();
                                          glSurface = null;
                                      }
                                  });
                              }
                          }
                      }
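
                      For what it's worth, a minimal sketch of how an OpMode might construct this subclass; the license-key and camera-monitor setup is the usual boilerplate from the Vuforia sample code:

                      Code:
                      VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters(R.id.cameraMonitorViewId);
                      parameters.vuforiaLicenseKey = "-- YOUR LICENSE KEY --";
                      parameters.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;

                      VuforiaLocalizerImplSubclass vuforia = new VuforiaLocalizerImplSubclass(parameters);
                      // After this, vuforia.rgb is refreshed by Vuforia_onUpdate whenever a new
                      // RGB565 frame arrives; check it for null before using it.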



                      • #12
                        I worked it out.
                        It took a very long time, but this is the function:

                        Code:
                        // first create a bitmap
                        Bitmap bm = Bitmap.createBitmap(rgb.getWidth(), rgb.getHeight(), Bitmap.Config.RGB_565);
                        // then load the image into it
                        bm.copyPixelsFromBuffer(rgb.getPixels());
                        // now use OpenCV to do the rest

                        To fix the crashing issue, call this after processing the image:
                        Code:
                        frame.close();
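
                        And from there, a rough sketch of the RED/BLUE check, assuming the OpenCV Android bindings are in the project; the whole-image channel comparison is illustrative, and you would crop to the beacon region first:

                        Code:
                        import org.opencv.android.Utils;
                        import org.opencv.core.Core;
                        import org.opencv.core.Mat;
                        import org.opencv.core.Scalar;
                        import org.opencv.imgproc.Imgproc;

                        Mat mat = new Mat();
                        Utils.bitmapToMat(bm, mat);                          // RGBA Mat from the Bitmap
                        Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2RGB);

                        // Sum the channels over the (cropped) beacon region:
                        // sums.val[0] = red, [1] = green, [2] = blue
                        Scalar sums = Core.sumElems(mat);
                        boolean beaconIsRed = sums.val[0] > sums.val[2];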

