  • Localization, Dead-Reckoning, Vuforia, encoder & gyro fusion?

    I realize I may be bringing this up at a time when the bulk of FTC teams in the world are disengaging from robotics for a few months, but are there any resources out there that have the collected wisdom of how to compute the position of the robot on the field over the timeframe of autonomous and even teleop?

    I worked with Terabytes 4149 this year and they developed a dead-reckoner that mostly uses encoder readings from the 4 wheels on a holonomic drive. It also uses the gyro in the Rev Hub to compensate for some wonkiness that we saw in the angular position.
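
    Roughly, the core of what they built looks something like this (a simplified sketch, not their actual code; TICKS_PER_INCH and the kinematic signs are placeholders that depend on the wheel and motor orientation):

    public class DeadReckoner {
        private static final double TICKS_PER_INCH = 45.0;   // placeholder; measure on the real drivetrain

        public double x, y;   // field-frame position estimate, in inches

        // Call every loop with the encoder tick deltas since the previous call
        // (front-left, front-right, back-left, back-right) and the gyro/IMU heading in radians.
        public void update(int dFl, int dFr, int dBl, int dBr, double headingRad) {
            // Mecanum forward kinematics: robot-frame forward and strafe displacement.
            double forward = (dFl + dFr + dBl + dBr) / (4.0 * TICKS_PER_INCH);
            double strafe  = (dFl - dFr - dBl + dBr) / (4.0 * TICKS_PER_INCH);

            // Rotate the robot-frame displacement into the field frame using the heading.
            x += forward * Math.cos(headingRad) - strafe * Math.sin(headingRad);
            y += forward * Math.sin(headingRad) + strafe * Math.cos(headingRad);
        }
    }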

    I want to encourage them to share this work with the greater FTC community and I'd like to help them connect with others that have done similar work.

    Video of a drive around the lander (using some more code than just the dead-reckoner, but dead-reckoner was critical to this): https://www.youtube.com/watch?v=0cEn7Xv24IA

    Any thoughts on who else out there has tried this kind of thing?

  • #2
    Our team used the gyro, run-to-position, and a distance sensor to move accurately during autonomous. In simple terms: while on the lander we calibrated the gyro position, which is 45 degrees from any wall. We used TensorFlow to find the gold, then navigated to knock it off, with distances computed via run-to-position and angular turns via the gyro. Then we ran over to the wall, turned, and ran a calibrated distance along the wall, measuring the distance to the wall before and after that run. Via trig we calculated the error of the calibration run versus the gyro position and turned the robot to align parallel to the wall. Then we finished up the autonomous run.

    We found that the lander placement by the tournament field crew was not very accurate: it is supposed to be at 45 degrees but was off by up to 7 degrees, and we saw a horizontal displacement error of at least a couple of inches in both axes on some fields. That is what made recalibrating against the wall necessary.

    One of the big problems of the FTC software environment is the inability to use sensors accurately. As I have posted before, using Java on an application platform is a bad idea for machine control, and it teaches our future generation the wrong way to control things. In the current environment, we could only take one distance measurement before and one after. Since there is some jitter in the reported distance, the proper implementation of the wall calibration routine would be to take multiple samples, throw out the high and low, and average the rest to get much better accuracy in the result. The distance sensors themselves (the ICs inside, actually) have the ability to be used in a proper way (interrupt-driven sampling); it is simply that the Java environment doesn't lend itself to it.
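
    Something like the following is what I mean, within what the SDK allows (a sketch only; the sample count is illustrative and each getDistance() call is still a blocking read):

    import com.qualcomm.robotcore.hardware.DistanceSensor;
    import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;
    import java.util.Arrays;

    public class TrimmedDistance {
        // Read the sensor n times (n >= 3), drop the min and max, and average the rest.
        public static double read(DistanceSensor sensor, int n) {
            double[] samples = new double[n];
            for (int i = 0; i < n; i++) {
                samples[i] = sensor.getDistance(DistanceUnit.INCH);   // blocking I2C read
            }
            Arrays.sort(samples);
            double sum = 0;
            for (int i = 1; i < n - 1; i++) {
                sum += samples[i];                                    // skip samples[0] and samples[n-1]
            }
            return sum / (n - 2);
        }
    }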



    • #3
      11343_Mentor I think our team saw some of the positional inaccuracies of Lander vs. Perimeter. For the drives that they did to hit things like mineral sampling and getting into ("over", in their case) the crater, none of those inaccuracies mattered, because the Lander was located consistently relative to the field -- lander to mineral was pretty consistent after accounting for the movement we'd see in the dead-reckoner when the wheels arrived at the floor unevenly.

      It sounds like your team did a good job in exploiting the information they had available via sensors/encoders. Did you introduce them to any kind of higher level position control (running on the Java side) or just use the run-to-position thing?

      I hear what you're saying about some of the squirreliness in cycle time and such. I see that there are better ways to do things too (controls and signals guy here: DSP FSK receivers, sensorless brushless motor control with high angular acceleration and big current swings).

      I also see that there are tradeoffs. An Android phone provides a HUGE amount of processor to do these things in a software environment (Java) that is fast; that platform also has a GPU on it to do machine vision stuff. And it's cheap too (the team is still running with a bunch of $20 ZTE Speeds from the first year of Android). The Rev Hub integrates a bunch of pieces into a hardware platform that should allow good partitioning between the tighter control time constraints (position, speed, current?) and the higher-level parts of motion planning. Java makes it really hard to crash the whole machine in a way that's opaque -- embedded C on bare metal, not so much.

      11343_Mentor I've appreciated that you too see some of the limitations and troubles inflicted by the software architecture.



      • #4
        11343_Mentor What sensor are you using for distance?



        • #5
          Nice 'diamond' movement.

          We did not use dead reckoning, but we used a combination of IMU orientation (which is really both more and less than a gyro) from the IMU built into the REV Expansion Hub, distance sensors (both REV 2M and MR), and encoders for distance / turn angle.

          When the robot 'lands' we use the motor encoders to determine the drive distance and, when turning, the turn angle. We use the IMU orientation to give us the magnetic heading. We found that the IMU was pretty good at keeping the heading but was slow to update when we turned, so we used the IMU to determine our direction and how far we had to turn, and then the encoders to manage the actual turn.
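
          In code the turn is roughly this (a simplified sketch inside a LinearOpMode, assuming RUN_TO_POSITION; TICKS_PER_DEGREE and getImuHeadingDegrees() stand in for our calibration constant and IMU-read code):

          void turnToHeading(double targetDeg) {
              // The IMU decides how far we have to turn...
              double errorDeg = AngleUnit.normalizeDegrees(targetDeg - getImuHeadingDegrees());
              int ticks = (int) (errorDeg * TICKS_PER_DEGREE);   // placeholder calibration constant

              // ...and the encoders manage the actual turn.
              leftDrive.setTargetPosition(leftDrive.getCurrentPosition() - ticks);
              rightDrive.setTargetPosition(rightDrive.getCurrentPosition() + ticks);
              leftDrive.setMode(DcMotor.RunMode.RUN_TO_POSITION);
              rightDrive.setMode(DcMotor.RunMode.RUN_TO_POSITION);
              leftDrive.setPower(0.3);
              rightDrive.setPower(0.3);
              while (opModeIsActive() && (leftDrive.isBusy() || rightDrive.isBusy())) {
                  idle();   // wait for the encoder-managed turn to finish
              }
              leftDrive.setPower(0);
              rightDrive.setPower(0);
          }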

          When we got to the depot we parked at approximately 45 degrees to the two corner walls and used an MR distance sensor to determine how far to drive to get to the wall.

          Whenever we were next to a wall we used the motor encoders to determine the drive distance, but we also had two REV 2M distance sensors, one at the front and one at the back of the robot, which we read to determine how far we were from the wall and our angle relative to the wall. With this information we used a PID controller to adjust the power ratio between the left and right motors to steer us to the correct distance from the wall as we moved along it. The readings were noisy but averaged out in the robot motion.
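
          The geometry from the two sensors reduces to a couple of lines (a sketch only; SENSOR_SPACING, the gains, and the target distance are placeholders, and the real version fed these errors into a PID):

          double front = frontWallSensor.getDistance(DistanceUnit.INCH);    // side-facing REV 2M near the front
          double back  = backWallSensor.getDistance(DistanceUnit.INCH);     // side-facing REV 2M near the back

          double wallDistance = (front + back) / 2.0;                       // stand-off distance from the wall
          double wallAngleRad = Math.atan2(front - back, SENSOR_SPACING);   // yaw relative to the wall

          // Proportional-only steering shown here for brevity.
          double steer = KP_DIST * (TARGET_DISTANCE - wallDistance) + KP_ANGLE * wallAngleRad;
          leftDrive.setPower(BASE_POWER - steer);
          rightDrive.setPower(BASE_POWER + steer);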

          We found that sensor read times were a problem, especially with the REV 2M sensors:
          Writing to a motor is effectively instant if you write the same value as before, otherwise about 4 ms.
          Reading a motor encoder takes about 3 to 4 ms per encoder.
          Reading the IMU takes between 10 and 25 ms.
          Reading an MR distance sensor takes between 10 and 30 ms.
          Reading a REV 2M distance sensor takes between 20 and 50 ms.
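
          These numbers are easy to reproduce with the SDK's ElapsedTime timer if you want to check your own hardware (the sensor name here is illustrative):

          ElapsedTime timer = new ElapsedTime();
          timer.reset();
          double d = rev2mSensor.getDistance(DistanceUnit.MM);   // one blocking REV 2M read
          telemetry.addData("REV 2M read time", "%.1f ms", timer.milliseconds());
          telemetry.update();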

          We usually used PID controllers to manage the motors, but it was harder when driving along the wall because of the long cycle time to read two Rev 2M distance sensors.

          The REV 2M distance sensors had good resolution, but we had to calibrate them individually to get the correct zero offset, which varied by over an inch. They were useful because of their resolution and narrow beam, so we used them on the sides of the robot for wall following.

          The MR distance sensor had a much wider beam angle and less resolution, so we used it when driving out of the depot where we did not want the angle to change the measurement.

          The speed of the Java code was not an issue, even with TensorFlow running to determine the mineral position.



          • #6
            FTC12676 We used the REV 2M.

            Nicks Nice writeup, very similar to our approaches. I should clarify that when I said gyro, we were in fact using the IMU in the Rev Hub. However, we did not see an issue with the slow update on the turns. We used a log PID routine that made the final movement during turns so slow that it was really accurate. If you try to end the PID too fast, you cannot get good ending positions. We initially tried to use PID routines coupled with the Rev distance sensors to follow the wall, but that was a disaster due to the slow update. We found that using the before and after readings over a calibrated distance got us an extremely accurate coordinate along the wall. A simple trig formula determines the error, which we then added to the IMU values to get a new heading that was parallel to the wall. (This was done prior to setting the marker.)
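
            The trig works out to something like this (a sketch only; CALIBRATION_DISTANCE and getImuHeadingDegrees() are placeholders, and the sign of the correction depends on which side the wall is on):

            double before = wallSensor.getDistance(DistanceUnit.INCH);
            // ... drive the calibrated distance along the wall ...
            double after = wallSensor.getDistance(DistanceUnit.INCH);

            // (after - before) is the opposite side of a right triangle whose adjacent side is the
            // calibrated run, so the angle is how far off-parallel the heading really was.
            double headingErrorDeg = Math.toDegrees(Math.atan2(after - before, CALIBRATION_DISTANCE));
            double wallParallelHeadingDeg = getImuHeadingDegrees() + headingErrorDeg;   // hypothetical IMU helper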

            Then we simply used that heading to reverse and follow it back to the crater. It took us a lot of trial and error to come up with a PID controller that accurately followed the heading. The key is to have some hysteresis built into the routine. With the right amount dialed in, our robot followed the heading without any perceptible wandering.
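
            The hysteresis is just a deadband around the target heading inside which the correction is held at zero (a sketch; the deadband width, gains, and state variables are the part that took all the tuning):

            double error = AngleUnit.normalizeDegrees(targetHeadingDeg - getImuHeadingDegrees());

            double steer;
            if (Math.abs(error) < DEADBAND_DEG) {
                steer = 0.0;                  // inside the deadband: no correction, so no hunting
            } else {
                integral += error * dt;       // integral, lastError, dt, and the gains are assumed fields
                steer = KP * error + KI * integral + KD * (error - lastError) / dt;
            }
            lastError = error;
            leftDrive.setPower(BASE_POWER - steer);
            rightDrive.setPower(BASE_POWER + steer);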



            • #7
              Nicks and 11343_Mentor: Cool to see that your teams did use the Rev 2M sensor. I saw that, and it looked really cool. Did you see any problems with its response when it was pointed at the wall of the field?

              Nicks Those times sound like things our team saw when we went looking for where all of our loop cycle time went. I think there is an argument that the comms with the Rev hub are architected the wrong way. Something like a subscription model and all of the data exchanged in one (or a pair) of bulk transfers {Android->RevHub, RevHub->Android} per loop cycle.
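
              (For reference: newer FTC SDK releases did later add bulk-read caching along these lines. A minimal sketch, assuming a recent SDK, where hardware reads within a loop are served from one cached bulk transfer per hub:)

              List<LynxModule> hubs = hardwareMap.getAll(LynxModule.class);
              for (LynxModule hub : hubs) {
                  hub.setBulkCachingMode(LynxModule.BulkCachingMode.AUTO);
              }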

              11343_Mentor It sounds like the way you compensated for the crazy loop time jitter was by making the sample rate slow. That way, the fixed jitter is a smaller fraction. The down side is that Nyquist says you're gonna limit the bandwidth, and hence response time and speed, of the machine. Maybe, "Jittery loop times make for boring autonomous competition."



              • #8
                ftcterabytes Yes, exactly, where there was jitter, we slowed things down to "filter" the error. As a consequence, our autonomous routines ended with maybe a second to spare.



                • #9
                  11343_Mentor Yup. Saw that slow, no-time-to-spare auton. Saw the kids worried. Saw the other mentor worried. Our trouble was a badly tuned controller, which was probably due to the long sample period.



                  • #10
                    ftcterabytes In limited testing we saw a 48-inch range perpendicular to the wall, but it reduced with the incident angle to the wall. When the sensor was 24 inches (perpendicular) from the wall it could read +/- 30 degrees either side of perpendicular before failing.
                    These are great things for the team to experiment with next year and put into the notebook.



                    • #11
                      Nicks Great to hear some test results that confirm things I've seen. I messed around with an eval board for that sensor (not through the Rev 2M) and found that it can get you a distance measurement of something behind the plastic of the walls, depending on the incident angle! It's like the stuff I learned about geometric optics in physics actually works!

