Vuforia Target Tracking example


  • Philbot
    started a topic Vuforia Target Tracking example

This year, the Vuforia imaging software can make target tracking really easy.

But so far, I've been surprised by how few teams have come to competition with any form of on-board image processing.

So it seemed I needed to create a demo program and video tutorial to show how easily it can be done.

This YouTube tutorial https://www.youtube.com/watch?v=AxKrJEtfuaI is a detailed code walk-through of such a demo OpMode.

    The code is online at https://github.com/gearsincorg/FTCVuforiaDemo

You can drop this code folder into any FTC SDK project and use it with an omni-robot.
It's built for a 3-wheel omni-bot, but the video shows how and where to adapt it to a 4-wheel omni or mecanum drive.

    The demo shows how to extract image location information, and one way to use this info to navigate to a target.
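To give a feel for it, here is a trimmed sketch of the idea (the license key is a placeholder; see the GitHub link above for the full demo):

Code:
// Inside an OpMode: initialize Vuforia and read one target's pose.
VuforiaLocalizer.Parameters parameters =
        new VuforiaLocalizer.Parameters(R.id.cameraMonitorViewId);
parameters.vuforiaLicenseKey = "-- YOUR LICENSE KEY --";
parameters.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;
VuforiaLocalizer vuforia = ClassFactory.createVuforiaLocalizer(parameters);

VuforiaTrackables targets = vuforia.loadTrackablesFromAsset("FTC_2016-17");
VuforiaTrackableDefaultListener listener =
        (VuforiaTrackableDefaultListener) targets.get(0).getListener();  // "Wheels"
targets.activate();

OpenGLMatrix pose = listener.getPose();   // null when the image isn't visible
if (pose != null) {
    VectorF translation = pose.getTranslation();   // mm, in the camera's frame
    double x = translation.get(0);
    double y = translation.get(1);
    double z = translation.get(2);
}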

    This approach has made our auto and teleop navigation very successful so far, so it seemed appropriate to share it with everyone.

    Hope this is useful.

    Phil.

  • FLARE
    replied
    Thank you for these videos!

    We have set up the Rover Ruckus targets using OpenGLMatrix & are able to find the correct coordinates for our robot pretty much anywhere on the field. We have modified your sample code for a mecanum drivetrain & are trying to navigate to a given position on the field. It does eventually get there, but circles several times before finally stopping.

I believe the issue is in the gain values. We have spent several meetings using guess & check on the gains, but can't really get the robot to drive straight to where we want and then stop nicely. Can you offer some insight on how to compute the proper gains to prevent this circling?
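For reference, here is essentially the shape of the loop we're tuning (the gain and limit names are our own):

Code:
// Simplified shape of our drive-to-position loop.
double rangeError   = currentRange - STANDOFF_MM;   // distance still to cover
double bearingError = currentBearing;               // degrees off the desired heading

// These are the constants we've been guess-and-checking.
double axial = Range.clip(rangeError * GAIN_AXIAL, -MAX_AXIAL, MAX_AXIAL);
double yaw   = Range.clip(bearingError * GAIN_YAW,  -MAX_YAW,  MAX_YAW);
if (Math.abs(bearingError) < DEADBAND_DEG) yaw = 0;  // stop hunting near zero

moveRobot(axial, 0, yaw);   // our mecanum mixer (axial, lateral, yaw)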


  • Inventer bots
    replied
    I'd like to hear if anyone has been able to do accurate tracking of the center vortex position with Vuforia or otherwise. That would get their attention.
    Team 5975 CYBOTS uses a camera to track the Center Vortex.


  • Philbot
    replied
    Originally posted by PeterMnev View Post
Our team 8121 successfully used Vuforia for autonomous mode this year. We used it for positioning in front of the beacons. Here is a video from the Maryland State championship.

    Vuforia has the following benefits:
    • Easy to use
• Usable from any position on the field, provided the phone can see the targets

    Though there are a few disadvantages we need to work around:
• There is a delay with initial detection, especially if the robot is moving quickly
• The coordinates are reported with a delay of about 400 ms. This is on the ZTE Max and may be different on other phones.
• Tracking without seeing the image is not useful for precise positioning in front of the beacons. It tracks the background, which is far beyond the actual target.

    I will try to publish the code and description when I have the time to do so.
    That's a great Auto. Sorry we didn't get to play with/against you at MD State.

    Phil (2818 G-FORCE)


  • PeterMnev
    replied
Our team 8121 successfully used Vuforia for autonomous mode this year. We used it for positioning in front of the beacons. Here is a video from the Maryland State championship.

    Vuforia has the following benefits:
    • Easy to use
• Usable from any position on the field, provided the phone can see the targets

    Though there are a few disadvantages we need to work around:
• There is a delay with initial detection, especially if the robot is moving quickly
• The coordinates are reported with a delay of about 400 ms. This is on the ZTE Max and may be different on other phones.
• Tracking without seeing the image is not useful for precise positioning in front of the beacons. It tracks the background, which is far beyond the actual target.

    I will try to publish the code and description when I have the time to do so.
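In the meantime, one thing that helps with the delayed coordinates is to only act on fresh estimates; the default listener's getUpdatedRobotLocation() returns null unless a new one has arrived since the last call. A minimal sketch:

Code:
// Keep the newest estimate; ignore calls where nothing fresh arrived.
OpenGLMatrix latest = listener.getUpdatedRobotLocation();
if (latest != null) {
    lastLocation = latest;
}
// Steer from lastLocation, remembering it may still be ~400 ms old.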


  • Philbot
    replied
    Originally posted by DanOelke View Post
In my sandbox I did play with that. It was pretty cool that after it found a target I could turn and move quite a bit and it was still tracking where the image was. I didn't quantify it very well, but it did seem to accumulate error over time, and the more you moved with the target not in view, the worse it got.
    Yes, it can only do so much without actually knowing the physical relationships between the images it sees, but it is great for extending the tracking if the image comes in and out of frame.


  • Philbot
    replied
    Originally posted by rvansmith View Post
I used the front and rear cameras of the ZTE Speed, and I got better range out of the rear camera than the front by somewhere between 1 and 2 feet.
Here's an interesting thing...

I also felt that I got better range with the rear-facing camera than the front-facing camera. (I was using the Moto G2 and Moto G4.)

    However, I was just recently told that Vuforia is downsizing the image to 640x480 prior to processing.

Based on my own usage, and now your corroborating test, I'm having trouble believing this.
I'm also having trouble seeing how a 640x480 image has sufficient resolution to see the target details at a distance of 3-4 feet.

    I think this needs some more investigation.
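If anyone wants to repeat the front/rear comparison, switching cameras is a one-line change in the Vuforia initialization:

Code:
// Swap between the two phone cameras and compare detection range.
parameters.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;   // or FRONT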


  • rvansmith
    replied
    Originally posted by DanOelke View Post
In my sandbox I did play with that. It was pretty cool that after it found a target I could turn and move quite a bit and it was still tracking where the image was. I didn't quantify it very well, but it did seem to accumulate error over time, and the more you moved with the target not in view, the worse it got.

Phil - did you ever play with a phone (like the S5) with a higher-resolution camera? Does it give greater range for detecting the targets?
I used the front and rear cameras of the ZTE Speed, and I got better range out of the rear camera than the front by somewhere between 1 and 2 feet.


  • DanOelke
    replied
Originally posted by Philbot View Post
This setting actually lets the software track BEYOND the target image. It uses changes in the general scene to continue tracking, and it's often possible to rotate a full 90 degrees off a target and still know your position. I haven't tried it with running over the top of a target image, and it may not work as well (given the plainness of the beacon), but it's worth a try.
In my sandbox I did play with that. It was pretty cool that after it found a target I could turn and move quite a bit and it was still tracking where the image was. I didn't quantify it very well, but it did seem to accumulate error over time, and the more you moved with the target not in view, the worse it got.

Phil - did you ever play with a phone (like the S5) with a higher-resolution camera? Does it give greater range for detecting the targets?


  • Philbot
    replied
    Originally posted by DanOelke View Post
A few weeks ago my team also tried using Vuforia to locate particles or the center vortex. They got a complete scan of a particle, which was harder than I thought it should be, but when testing that scan it was unable to find the particle, even when leaving it exactly where it was scanned.

The center vortex was a bit of a challenge since it's larger than the scanner expects, but they got a scan by placing the vortex upside down on the playing field. This was to keep the orientation the same as a robot looking up towards the vortex. They were not able to get a full 360° scan, but were able to find the vortex occasionally when testing the scan. However, as soon as the vortex was moved or the lighting changed at all, it was no longer able to find the center vortex.

Since we have a side-pusher and already have a scheme for finding the lines and wall using color sensors and a range sensor, using Vuforia for those targets wasn't really useful.

With Vuforia, I could imagine a future game with something like the center structure from Cascade Effect having an image on each of its 4 sides. That could be useful for robots wanting to always know where they are on the field.
From what I can tell of Vuforia, it's looking for "features" it has identified in the pre-scanned images. The assumption is that these features do not change relative to each other under normal circumstances, and so any change in their appearance is used to determine the location of the viewer.

This works great with fixed images, even ones wrapped around a regular shape (like a box or can), but anything that has true 3D features (like a wiffle ball or vortex) will present completely different features depending on the viewing angle, so it's extremely hard to use for spatial location.

Had the targeting been incorporated into the game earlier, there could have been an image placed underneath the vortex, or on the vertical support...

As it is, I'm really glad they incorporated them under the beacons. This way there are at least two means to track to the center of each: the image or the white line.

Maybe next time.


  • Philbot
    replied
    Originally posted by FTC8913 View Post
Hello, thank you so much for providing the example! We may have missed this, but in our case, the phone is mounted rather high on the front of the robot, and as the robot gets close to the image, the image actually goes out of the frame; basically, it loses track of the image/location, causing isVisible() to return false. If we had the phone mounted close to the ground, losing the image probably wouldn't happen, but we can't make the change now, as our competition is this weekend.
    The Vuforia targets were added to the game late in the process, so their location isn't necessarily optimal.
We also struggled to find a good camera position, but ultimately, with the Moto G flipped so the camera was at the bottom and the phone mounted as low as possible, we could see the images right up to the end.

One thing you could try, to extend your tracking, is to enable Vuforia's "Extended Tracking" mode.

    Find this line in the Vuforia initialization code, and change false to true.

Code:
parameters.useExtendedTracking = false;   // set this to true to enable extended tracking
This setting actually lets the software track BEYOND the target image. It uses changes in the general scene to continue tracking, and it's often possible to rotate a full 90 degrees off a target and still know your position. I haven't tried it with running over the top of a target image, and it may not work as well (given the plainness of the beacon), but it's worth a try.


  • DanOelke
    replied
A few weeks ago my team also tried using Vuforia to locate particles or the center vortex. They got a complete scan of a particle, which was harder than I thought it should be, but when testing that scan it was unable to find the particle, even when leaving it exactly where it was scanned.

The center vortex was a bit of a challenge since it's larger than the scanner expects, but they got a scan by placing the vortex upside down on the playing field. This was to keep the orientation the same as a robot looking up towards the vortex. They were not able to get a full 360° scan, but were able to find the vortex occasionally when testing the scan. However, as soon as the vortex was moved or the lighting changed at all, it was no longer able to find the center vortex.

Since we have a side-pusher and already have a scheme for finding the lines and wall using color sensors and a range sensor, using Vuforia for those targets wasn't really useful.

With Vuforia, I could imagine a future game with something like the center structure from Cascade Effect having an image on each of its 4 sides. That could be useful for robots wanting to always know where they are on the field.


  • korimako
    replied
    Originally posted by Philbot View Post
This year, the Vuforia imaging software can make target tracking really easy.

But so far, I've been surprised by how few teams have come to competition with any form of on-board image processing.
Just to add to your data points... Our team fired up the Vuforia demos and was excited to see the targets with orientation marks on them. They thought they might track particles or the center vortex... The beacon is the same problem as last year, which had already been solved. Scanning a particle was next to impossible with the Vuforia app in our hands (I think because of all the holes in the balls?), and the kids deemed it overkill and unnecessary for finding the beacons. Also, mounting the phone, since no additional cameras are allowed, was a big hurdle.


I think had there been some targets on the center vortex, you might have seen more teams using it. It also seems like allowing a USB camera would be helpful, so the geometry of mounting the phone wouldn't limit things so much. My 2c.


  • FTC8913
    replied
Hello, thank you so much for providing the example! We may have missed this, but in our case, the phone is mounted rather high on the front of the robot, and as the robot gets close to the image, the image actually goes out of the frame; basically, it loses track of the image/location, causing isVisible() to return false. If we had the phone mounted close to the ground, losing the image probably wouldn't happen, but we can't make the change now, as our competition is this weekend.

Anyhow, our initial approach was to just switch to other sensors (like distance sensors) to get to the beacon once Vuforia loses the image (and the robot, after navigating, is close to the location where it lost the image). We also tried to look around for the image (if it should still fit in the frame) in case it gets lost for some reason.
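Roughly, our hand-off logic has this shape (the sensor threshold and helper names are just placeholders for our code):

Code:
// Trust Vuforia while the image is visible; otherwise fall back.
if (listener.isVisible()) {
    OpenGLMatrix pose = listener.getPose();
    driveFromPose(pose);                                 // placeholder: Vuforia steering
} else if (rangeSensor.getDistance(DistanceUnit.CM) < HANDOFF_CM) {
    driveFromRangeSensor();                              // placeholder: close-in approach
} else {
    turnInPlace(SEARCH_POWER);                           // rotate slowly to reacquire
}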

Would you suggest looping through slight rotating movements (left or right, or both), stopping for a bit and trying to reacquire the image in case the robot loses it somehow? Or do you have other suggestions for what to do after the phone camera loses the image?

In the end, after much testing, it seems to our group that Vuforia is not reliable enough in autonomous beyond the initial image/robot localization, and we have gone back to relying on light/distance sensors to get to the beacon.

    Thank you again.


  • Philbot
    replied
    Originally posted by gbailey View Post
I think it'd be helpful to provide a simpler sample opmode that just shows the positioning of the chosen target relative to the phone's camera, providing both translation and rotation. I get that it can be useful to establish your absolute field position by doing a bunch of transformations (as the provided sample opmode does), but the simpler calculation that involves (X,Y,Z) positioning from the phone's perspective is easier to compute and to perform turning and driving operations with (i.e., using Math.atan2 for turn angle and the Z position for distance from the target).
    You have pretty much asked for exactly what my posted sample demonstrates.

We didn't use the "actual" locations of the images on the field. Instead, we used a common location at the origin of the field.
This then enabled us to know "where the robot is relative to the target" (not the field), which is essentially the same as "where the target is relative to the robot".

The math in this example is just one hypotenuse calculation and one asin() to get range and bearing to the target.
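In code, it boils down to something like this (a sketch with simplified names; the signs depend on your axis conventions):

Code:
// Put the target at the field origin, so "robot location" is really
// "robot location relative to the target".
target.setLocation(OpenGLMatrix.translation(0, 0, 0));

OpenGLMatrix robotLocation = listener.getUpdatedRobotLocation();
if (robotLocation != null) {
    VectorF pos = robotLocation.getTranslation();   // millimeters
    double robotX = pos.get(0);
    double robotY = pos.get(1);

    // One hypotenuse and one asin(): range and bearing to the target.
    double targetRange   = Math.hypot(robotX, robotY);
    double targetBearing = Math.toDegrees(Math.asin(robotY / targetRange));
}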

One thing to note: we found it is important to let Vuforia know the position of the camera on the robot (rather than just using the camera's perspective).
This helps with navigation because our robot pivots on its center point, and as it does, the camera (which is located at the front of the robot) swings to the left or right.
By letting Vuforia know where the camera is, it can take this translation into account, so it doesn't change the robot's "position" just because the image moves laterally.

    It's subtle, but it became very important as we got very close to the target image for beacon claiming.
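In the SDK, that means handing the listener the camera's mounting transform; the numbers below are placeholders for your own measurements:

Code:
// Where the camera sits on the robot: mm from the robot's center, plus
// a rotation matching how the phone is mounted (placeholder values).
OpenGLMatrix phoneLocationOnRobot = OpenGLMatrix
        .translation(200, 0, 150)     // e.g. 200 mm forward, 150 mm up
        .multiplied(Orientation.getRotationMatrix(
                AxesReference.EXTRINSIC, AxesOrder.XYZ,
                AngleUnit.DEGREES, 90, 0, 0));

listener.setPhoneInformation(phoneLocationOnRobot, parameters.cameraDirection);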

    Phil.
