Gyro Read Latency... Development team update.


  • #76
    @Philbot
    I still question why there is so much slowdown when adding devices. Isn't there plenty of bandwidth on USB 2 to handle the relatively small amount of data being transferred from these sensors?
    Do the controllers and CDIM "constantly" send data without being requested?
    Have you looked at similar measurements for the update rate of encoders? We have 1 gyro, 1 color sensor, 1 servo controller with 2 devices, and 3 motor controllers (2 with encoders, and only 1 that uses the encoders during driving). Does adding each device/controller incur a penalty, even when no commands or requests are being made?
    The time doubled when you added the motor controllers and servo controller while disabling the color sensors. Is there a way to "disable" the non-drive motor controllers and servo controllers during certain portions of the match to free up time for power commands and encoder returns?
    40Hz seems like a pretty big constraint on control loops!



    • #77
      Note that the USB bus is not likely the most significant limiting factor for I2C sensors. The I2C 'bus' in the CDIM is run at 100 kbits/sec (I2C 'standard mode') per the MR specs for the device. So, that data rate must be shared across all the I2C sensors attached to the CDIM. As you say, once you hit the USB bus, much more bandwidth is available, so why everything else seems slow is not obvious to me.
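      To put the 100 kbit/s figure in perspective, here is a back-of-the-envelope sketch (my own numbers, not from the MR documentation) of the raw wire time for a single register read. Each I2C byte costs roughly 9 clock cycles (8 data bits plus an ACK), and a register read is a write phase (device address + register number) followed by a repeated-start read phase (device address + data bytes):

```java
public class I2cTiming {
    /**
     * Approximate wire time in milliseconds for one I2C register read.
     * Assumes 9 clocks per byte (8 bits + ACK); start/stop bits are ignored.
     * Bytes on the wire: addr + reg (write phase), then addr + data (read phase).
     */
    public static double readTimeMs(int dataBytes, int busHz) {
        int bytesOnWire = 2 + 1 + dataBytes;
        double bits = bytesOnWire * 9.0;
        return bits / busHz * 1000.0;
    }

    public static void main(String[] args) {
        // A 2-byte heading read at 100 kbit/s is well under a millisecond.
        System.out.printf("%.2f ms%n", readTimeMs(2, 100_000));
    }
}
```

      Even a 6-byte read is under a millisecond of wire time, so the I2C bus rate alone cannot explain latencies in the tens of milliseconds.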



      • #78
        Originally posted by 5294-jjkd View Post
        Note that the USB bus is not likely the most significant limiting factor for I2C sensors. The I2C 'bus' in the CDIM is run at 100 kbits/sec (I2C 'standard mode') per the MR specs for the device. So, that data rate must be shared across all the I2C sensors attached to the CDIM. As you say, once you hit the USB bus, much more bandwidth is available, so why everything else seems slow is not obvious to me.
        Maybe it's a limitation of the Android phone's USB chipset, being in OTG mode?



        • #79
          Originally posted by FTC7253 View Post
          @Philbot
          I still question why there is so much slowdown when adding devices. Isn't there plenty of bandwidth on USB 2 to handle the relatively small amount of data being transferred from these sensors?
          My "limited" understanding here is that it's not a data-transmission issue; it's a communications setup/teardown time issue.
          Yes, I2C and USB can transmit hundreds of thousands or millions of bits per second once an active connection has been established and data is flowing between the phone and a specific device.
          But the RC phone and the various attached devices aren't able to make full use of this bandwidth.

          Each exchange with a CORE device (or, in the case of the DIM, each I2C device) is performed as a transaction, which has a definite fixed overhead associated with it.
          It's this overhead (which appears to be largely independent of the USB or I2C data rate) that is the main contributing factor to the overall transaction rates.
          i.e., it wouldn't matter if the volume of data in a packet were greatly increased or decreased; the cycle times would be very similar.

          I don't know how much of this delay is a function of the interfacing methods used, or something embedded into Android.

          With Android, some things that you would expect to be fast seem to have strange overheads.
          e.g., I was trying to use the phone's camera LED as an indicator, only to find that it added 100 mSec latencies to the cycle times. And this was completely within the Android SDK.

          So... that's just to explain why, for the moment, the raw data rates don't account for the measured cycle times.
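          Philbot's point can be captured in a toy model. Assume (my numbers, purely illustrative) a fixed setup cost per transaction and a USB full-speed payload rate; the payload term all but vanishes next to the fixed overhead:

```java
public class CycleModel {
    /**
     * Rough per-loop cost of polling n devices: each transaction pays a
     * fixed setup overhead plus a (tiny) payload transfer time.
     */
    public static double cycleMs(int devices, double setupMs,
                                 int payloadBytes, double bytesPerMs) {
        return devices * (setupMs + payloadBytes / bytesPerMs);
    }

    public static void main(String[] args) {
        // 3 devices, 10 ms setup each, 26-byte payload at ~1500 bytes/ms:
        System.out.printf("%.2f ms%n", cycleMs(3, 10.0, 26, 1500.0));
        // Doubling the payload barely moves the number:
        System.out.printf("%.2f ms%n", cycleMs(3, 10.0, 52, 1500.0));
    }
}
```

          Under these assumed numbers the payload contributes microseconds while the setup overhead contributes tens of milliseconds, which is consistent with cycle times scaling with device count rather than data volume.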



          • #80
            Originally posted by Philbot View Post
            Hi All....

            This post specifically addresses the issues raised about how quickly (slowly) gyro updates are available to an opmode.
            It does not address ODS issues that may or may not be related.

            The planned schedule is to migrate the current 2.4 Beta to release status this Monday.
            At the same time (or very soon afterwards) a new 2.5 beta will be made available on the GitHub site.

            There are three topics I wanted to discuss here.

            1) There were some inefficiencies in the MR Gyro I2C handling code which were slowing down the transfer of fresh heading information from the Gyro.
            These inefficiencies translated into a 100% overhead, so once they were removed, the data latency was reduced to about 50% of its former value.
            These changes have already been implemented in the forthcoming 2.5 beta.

            2) Since any device which exists in the robot configuration adds to the overall data latency, there are gains to be realized by "disabling" any devices not currently being used (thanks to the other posters on this forum for this idea). In the case of this year's game, this definitely includes the MR color sensors, which aren't needed for normal driving, only for beacon identification.

            The dev. team sees a real advantage in providing an API to enable and disable sensors to improve performance, but this requires further coding and testing, so it's unlikely to be included in the 2.5 beta. However, it is possible for teams to selectively implement this same process in their own opmodes (see end of post).

            3) There may be some small I2C efficiency gains available from a tweak of the general I2C interface to utilize block data transfers.
            These changes are still under review, and will probably not be implemented until after the 2.5 Beta.

            To give you an idea what can be expected...

            The following Gyro latency times indicate the improvements that have been obtained from methods 1 and 2.
            These times indicate how many mSec elapse between changes in Gyro values.

            Configuration: 1 Gyro
            (27 mSec) SDK 2.3 and 2.4
            (13 mSec) SDK 2.5

            Configuration: 1 Gyro, 2 color sensors
            (80 mSec) SDK 2.3 and 2.4
            (25 mSec) SDK 2.5
            (14 mSec) SDK 2.5 with Color sensors disabled while driving.


            Configuration: 1 Gyro, 2 color sensors, 4 Motor Controllers, 1 Servo Controller.
            (170 mSec) SDK 2.3 and 2.4
            (65 mSec) SDK 2.5
            (25 mSec) SDK 2.5 with Color sensors disabled while driving.

            So your specific improvements will depend on what you have on your robot and whether you can disable other sensors while driving.

            The following code shows how the color sensors can be initialized, and then subsequently enabled and disabled.


            Code:
            // Declare Color Sensor objects (and other items required to enable/disable)
            ModernRoboticsI2cColorSensor  leftColor = null;
            ModernRoboticsI2cColorSensor  rightColor = null;
            
            I2cAddr leftColorAddress  = I2cAddr.create8bit(0x3c);
            I2cAddr rightColorAddress = I2cAddr.create8bit(0x4c);
            
            I2cController   leftColorController;
            I2cController   rightColorController;
            
            I2cController.I2cPortReadyCallback leftColorCallback;
            I2cController.I2cPortReadyCallback rightColorCallback;
            
            boolean colorSensorsDisabled = false;
            Code:
            /***
            * Initialize two color sensors, and take a copy of callback handlers for later use.
            */
            public void colorInit() {
            	leftColor   = myOpMode.hardwareMap.get(ModernRoboticsI2cColorSensor.class, "left color");
            	leftColor.setI2cAddress(leftColorAddress);
            	leftColorController = leftColor.getI2cController();
            	leftColorCallback =  leftColorController.getI2cPortReadyCallback(leftColor.getPort());
            
            	rightColor  = myOpMode.hardwareMap.get(ModernRoboticsI2cColorSensor.class, "right color");
            	rightColor.setI2cAddress(rightColorAddress);
            	rightColorController = rightColor.getI2cController();
            	rightColorCallback =  rightColorController.getI2cPortReadyCallback(rightColor.getPort());
            
            	colorSensorsDisabled = false;
            }
            Code:
            /***
            *   enable color sensors by re-registering callbacks
            */
            public void colorEnable() {
                if (colorSensorsDisabled) {
                    if (leftColorCallback != null)
                        leftColorController.registerForI2cPortReadyCallback(leftColorCallback, leftColor.getPort());
                    if (rightColorCallback != null)
                        rightColorController.registerForI2cPortReadyCallback(rightColorCallback, rightColor.getPort());
                }
                colorSensorsDisabled = false;
            }
            Code:
            /***
            *   disable color sensors by de-registering callbacks
            */
            public void colorDisable()
            {
            	if (!colorSensorsDisabled) {
            	    leftColorController.deregisterForPortReadyCallback(leftColor.getPort());
            	    rightColorController.deregisterForPortReadyCallback(rightColor.getPort());
            	}
            	colorSensorsDisabled = true;
            }
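            To show how these helpers might be used during a match, here is a hypothetical flow. The `Robot` class below is a minimal stand-in for a team hardware class holding the methods above (it is not SDK code), so the sequencing can be exercised without the FTC SDK on the classpath:

```java
// Stand-in for a team hardware class exposing colorEnable()/colorDisable(),
// so the intended enable-while-needed pattern can be shown in isolation.
public class AutonomousFlow {
    static class Robot {
        boolean colorSensorsDisabled = false;
        void colorEnable()  { colorSensorsDisabled = false; }
        void colorDisable() { colorSensorsDisabled = true;  }
    }

    public static boolean[] run() {
        Robot robot = new Robot();

        robot.colorDisable();   // long drive to the beacon: shed I2C traffic
        boolean disabledWhileDriving = robot.colorSensorsDisabled;

        robot.colorEnable();    // at the beacon: color data is needed again
        boolean enabledAtBeacon = !robot.colorSensorsDisabled;

        return new boolean[]{ disabledWhileDriving, enabledAtBeacon };
    }
}
```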
            Originally posted by Philbot View Post
            Each exchange with a CORE device (or, in the case of the DIM, each I2C device) is performed as a transaction, which has a definite fixed overhead associated with it.
            It's this overhead (which appears to be largely independent of the USB or I2C data rate) that is the main contributing factor to the overall transaction rates.
            i.e., it wouldn't matter if the volume of data in a packet were greatly increased or decreased; the cycle times would be very similar.
            @Philbot, thank you for your insightful info.

            Are all (or the majority) of the transactions initiated by the Teamcode, or do the transactions between the RC and the controllers occur without ever being requested by the Teamcode?

            Way back at the beginning of the ResQ season, when there was a notion of a hardware cycle, I remember seeing an explanation like this: "the SDK reads the data from all the controllers into memory, and it writes all the new values from memory to the controllers. These two operations together constitute one hardware cycle."
            While the notion of a hardware cycle is gone, I am wondering if the SDK is still reading data without being asked by the Teamcode. If so, is there a way to avoid it?



            • #81
              Sorry for the unnecessarily long message above. Somehow "Reply with Quote" included more of the previous posts than I intended.



              • #82
                Originally posted by rbFTC View Post
                Sorry for the unnecessarily long message above. Somehow "Reply with Quote" included more of the previous posts than I intended.
                Caveat: I'm generalizing here...

                Many device read transactions are currently being initiated by the SDK in the background. The idea is to have as "fresh" data as possible available without having to "block" while waiting for a reply from each device.
                This includes motor encoder values, and most sensors. So, the system is polling attached devices in a separate thread.

                In the "old days" most of these same transactions were being done between loop() calls. So if you were using opMode style you would get fresh data each loop().
                AND (this is important) any commands you issued were done in the same period between loop calls, so commands and data ran in lock-step.
                At the start of each loop() call, you would know that your most recent commands had run, and presumably the data was correct in response to these commands.

                But... if you were using LinearOpMode, your thread would be running concurrently with the loop() cycles, so after issuing a command (like a mode change), you were unsure when it had actually been completed.
                This is what prompted the waitOneHardwareCycle() style of call. Those calls didn't tell the SDK when to issue the transactions; they just held up your processing until the transactions had been performed on their regular loop() schedule.

                Back to today.......

                Since the current system is ALSO polling in the background, it manages the synchronization of reads and writes by ensuring that commands (writes) are completed before returning to the user's code.
                So, writes BLOCK, but reads don't.

                This means that when you reset an encoder, when your motor.setMode(STOP_AND_RESET_ENCODER) call returns, the encoders are in fact ... reset...
                If you were to read them immediately, they would show zero.

                Also, calls to things like resetZAxisIntegrator() actually tell the gyro to reset its value, and then clear out the gyro buffer and wait for a fresh integrated Z value before returning.

                This may not be the most "efficient" way to implement your specific control strategy, but it is designed to produce the most consistent and fail-safe behavior.

                Phil.
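                The "writes block, reads don't" behavior Phil describes can be illustrated with a toy model (this is not SDK code, just the contract it implies): the write refreshes state before returning, so an immediate read from the cache sees the result.

```java
public class BlockingWriteDemo {
    // Toy model of the described SDK contract: writes complete before
    // returning; reads come from a background-refreshed cache.
    static class Motor {
        private int encoder = 1234;   // "hardware" encoder count
        private int cached  = 1234;   // value returned by non-blocking reads

        void stopAndResetEncoder() {  // a "write": blocks until applied
            encoder = 0;
            cached = encoder;         // cache refreshed before returning
        }

        int getCurrentPosition() {    // a "read": non-blocking, from cache
            return cached;
        }
    }

    /** After the blocking reset returns, an immediate read shows zero. */
    public static int afterReset() {
        Motor m = new Motor();
        m.stopAndResetEncoder();
        return m.getCurrentPosition();
    }
}
```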



                • #83
                  "Since the current system is ALSO polling in the background, it manages the synchronization of reads and writes by ensuring that commands (writes) are completed before returning to the user's code.
                  So, writes BLOCK, but reads don't."

                  Did anything change for the regular/iterative OpMode between the "old days" and the "new"? Or is it still correct to think of fresh data being available at the start of loop() and commands not being passed on until the end of loop()?

                  And I presume "fresh" still has the usual delay-time caveats discussed in this thread and elsewhere, right?

                  - Z



                  • #84
                    Originally posted by 5294-jjkd View Post
                    Note that the USB bus is not likely the most significant limiting factor for I2C sensors. The I2C 'bus' in the CDIM is run at 100 kbits/sec (I2C 'standard mode') per the MR specs for the device. So, that data rate must be shared across all the I2C sensors attached to the CDIM. As you say, once you hit the USB bus, much more bandwidth is available, so why everything else seems slow is not obvious to me.
                    Philbot already covered this pretty well in his response.

                    But just to add, USB is designed for bulk data transfers. So once a transaction is set up, you can indeed transfer data very fast, but there's about a 10 ms setup penalty for each transaction. This time is spent largely in the FTDI Android USB drivers. You can see this if you profile with traceview.
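                    If you want a rough look at this overhead without traceview, a wall-clock average around repeated calls is often enough. The helper below is generic; in an opmode you would pass a real sensor read as the `Runnable` (a sketch, not a profiler replacement):

```java
public class TxnTimer {
    /** Average wall-clock milliseconds per invocation of op over n runs. */
    public static double avgMs(Runnable op, int n) {
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) op.run();
        return (System.nanoTime() - t0) / 1e6 / n;
    }

    public static void main(String[] args) {
        // In an opmode you would pass e.g. () -> gyro.getHeading() here.
        System.out.printf("%.4f ms/call%n", avgMs(() -> { }, 10_000));
    }
}
```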



                    • #85
                      Originally posted by skatefriday View Post
                      Philbot already covered this pretty well in his response.

                      But just to add, USB is designed for bulk data transfers. So once the transaction is set up, you can indeed transfer data very fast, but there's about a 10ms setup penalty for each transaction. This time is spent largely in the FTDI android usb drivers. You can see this if you profile with traceview.
                      IMHO, using USB as an interconnect between components is a mistake. I'd rather see all the components connected via a CAN bus (or some other high-speed, low-latency, noise-immune bus), with a CAN controller communicating with the phone via USB. That way, all the data could be bundled into one big transaction to the phone per iteration (i.e. loop).



                      • #86
                        Originally posted by skatefriday View Post
                        Philbot already covered this pretty well in his response.

                        But just to add, USB is designed for bulk data transfers. So once the transaction is set up, you can indeed transfer data very fast, but there's about a 10ms setup penalty for each transaction. This time is spent largely in the FTDI android usb drivers. You can see this if you profile with traceview.
                        10 ms per transaction??? Does this occur with every read? If I am interpreting Philbot correctly, reads are occurring in the background (at what rate?), so when we ask for currentPosition, we are getting a value from a cache. Are the reads and writes handled separately, or does it switch back and forth between read and write on a single thread/logical connection?
                        Do we pay the 10 ms+ penalty only on write requests? The big problem we are seeing is that it frequently takes ~40 ms for control to return to our code when we set power on both motors on a controller (lMotor.setPower(x); rMotor.setPower(x)). This is a really long time when attempting any detailed control. It seems like we could easily cut this time in half if the interface exposed a means to set both powers with a single write.
                        As a side note, we do think that we were able to get a little improvement from optimizing how we call getCurrentPosition and setPower. Previously, we had these calls scattered through our code. We would call them each time we needed encoder counts for control (potentially multiple times), telemetry, dbglog, data logging, etc. We cleaned things up so that we now try to make a single getCurrentPosition call on entry to a "frame", and a single setPower call after all control calculations have been made in the frame. Of course this is one for each motor. The time between changes in encoder values improved when we did this.
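                        The "one read per frame" cleanup described above can be captured in a small snapshot object. The `Encoder` interface here is a hypothetical stand-in for `DcMotor.getCurrentPosition()`; nothing below is SDK code:

```java
public class FramePattern {
    /** Hypothetical stand-in for a hardware encoder read. */
    public interface Encoder { int raw(); }

    /** Snapshot taken once at frame entry; reused for control, telemetry, logging. */
    public static class Frame {
        private final int leftPos, rightPos;

        public Frame(Encoder left, Encoder right) {
            leftPos  = left.raw();   // the only hardware reads this frame
            rightPos = right.raw();
        }

        public int left()  { return leftPos;  }  // repeat calls: no bus traffic
        public int right() { return rightPos; }
    }
}
```

                        However many times control, telemetry, and logging consult the snapshot, only one hardware read per encoder happens per frame.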



                        • #87
                          Originally posted by FTC7253 View Post
                          As a side note, we do think that we were able to get a little improvement from optimizing how we call getCurrentPosition and setPower. Previously, we had these calls scattered through our code. We would call them each time we needed encoder counts for control (potentially multiple times), telemetry, dbglog, data logging, etc. We cleaned things up so that we now try to make a single getCurrentPosition call on entry to a "frame", and a single setPower call after all control calculations have been made in the frame. Of course this is one for each motor. The time between changes in encoder values improved when we did this.
                          Yes, it is very important not to scatter calls to get sensor values everywhere, because the unnecessary calls may increase bus transactions for the sensors. We learned that from the CAN bus in FRC. That's why the TrcDriveBase class in our library has a "Pre-Task" (i.e. a periodic task called before the main periodic loop) that reads all the encoders and the gyro for the drive base, calculates the robot's X, Y and heading values, and stores them. If the main code needs to know the robot's position, it calls the DriveBase to get the stored X, Y and heading. You can get these values multiple times without incurring any calls to the sensors at all.

                          In general, our cooperative multi-tasking scheduler divides a "time slice" into three sections: "PreTask", "MainTask" and "PostTask". Pre-tasks are called generally to acquire sensor data. Main-tasks are called to examine the sensor values and make decisions on what to do. Post-tasks take those decisions and produce actions (i.e. setting motors and actuators).

                          Having said that, we are still not doing a good enough job of preventing scattered sensor calls. We did a nice job in TrcDriveBase just because it is a standard subsystem in the library, but when students write code for other subsystems, they still tend to read sensors whenever they want, and there is nothing to prevent them from doing so. It would be nice if the FTC SDK enforced this discipline by encapsulating all sensor calls. The SDK is already doing some of this by caching some sensor data (e.g. I2C sensors), but it should do a periodic "Pre-Task" that gathers all sensor data in a single transaction per sensor per loop, so that no matter how many times "getSensorData" is called, it doesn't create extra bus transactions. Unfortunately, having USB as an interconnect doesn't help, because of the bus transaction overhead. But enforcing one transaction per sensor per loop will certainly help by keeping the number of USB transactions constant no matter how many sensor calls you make.
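                          A minimal sketch of that pre/main/post time-slice structure (the names mirror the description above; this is not the actual TRC library code):

```java
import java.util.ArrayList;
import java.util.List;

public class SliceScheduler {
    public final List<Runnable> preTasks  = new ArrayList<>(); // acquire sensor data
    public final List<Runnable> mainTasks = new ArrayList<>(); // examine values, decide
    public final List<Runnable> postTasks = new ArrayList<>(); // act on motors/actuators

    /** One time slice: every pre-task, then every main-task, then every post-task. */
    public void runSlice() {
        for (Runnable t : preTasks)  t.run();
        for (Runnable t : mainTasks) t.run();
        for (Runnable t : postTasks) t.run();
    }
}
```

                          Because sensor acquisition runs strictly before decision-making, main-tasks can read cached values as often as they like without generating bus traffic.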



                          • #88
                            Originally posted by mikets View Post
                            Yes, it is very important not to scatter calls to get sensor values everywhere, because the unnecessary calls may increase bus transactions for the sensors. We learned that from the CAN bus in FRC. That's why the TrcDriveBase class in our library has a "Pre-Task" (i.e. a periodic task called before the main periodic loop) that reads all the encoders and the gyro for the drive base, calculates the robot's X, Y and heading values, and stores them. If the main code needs to know the robot's position, it calls the DriveBase to get the stored X, Y and heading. You can get these values multiple times without incurring any calls to the sensors at all. In general, our cooperative multi-tasking scheduler divides a "time slice" into three sections: "PreTask", "MainTask" and "PostTask". Pre-tasks are called generally to acquire sensor data. Main-tasks are called to examine the sensor values and make decisions on what to do. Post-tasks take those decisions and produce actions (i.e. setting motors and actuators). Having said that, we are still not doing a good enough job of preventing scattered sensor calls. We did a nice job in TrcDriveBase just because it is a standard subsystem in the library, but when students write code for other subsystems, they still tend to read sensors whenever they want, and there is nothing to prevent them from doing so. It would be nice if the FTC SDK enforced this discipline by encapsulating all sensor calls. The SDK is already doing some of this by caching some sensor data (e.g. I2C sensors), but it should do a periodic "Pre-Task" that gathers all sensor data in a single transaction per sensor per loop, so that no matter how many times "getSensorData" is called, it doesn't create extra bus transactions. Unfortunately, having USB as an interconnect doesn't help, because of the bus transaction overhead. But enforcing one transaction per sensor per loop will certainly help by keeping the number of USB transactions constant no matter how many sensor calls you make.
                            BTW, the old RobotC has this concept too, not necessarily for all sensors but for the gamepad and motors. No matter how many times you "setMotorPower", you are just setting a value in an array. They have a periodic task that takes the motor power array and programs the motor controllers just once per loop. Nobody else has real access to the motor controllers.
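                            That RobotC scheme, where setMotorPower only writes an array entry and a periodic task flushes the array to the hardware once per loop, looks roughly like this (a sketch of the idea, not RobotC source):

```java
public class MotorCache {
    private final double[] pending;  // values written by user code, any number of times
    private final double[] applied;  // values actually "sent" to the controllers
    private int flushes = 0;

    public MotorCache(int motorCount) {
        pending = new double[motorCount];
        applied = new double[motorCount];
    }

    /** User-facing call: just an array write, no bus transaction. */
    public void setMotorPower(int motor, double power) {
        pending[motor] = power;
    }

    /** Periodic task: one hardware update per loop, regardless of setMotorPower calls. */
    public void flush() {
        System.arraycopy(pending, 0, applied, 0, pending.length);
        flushes++;
    }

    public double appliedPower(int motor) { return applied[motor]; }
    public int flushCount() { return flushes; }
}
```

                            However many times user code calls setMotorPower in a loop, the controllers see exactly one update, carrying the last value written.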



                            • #89
                              Originally posted by mikets View Post
                              The FTC SDK is already doing some of this by caching some sensor data (e.g. I2C sensors), but it should do a periodic "Pre-Task" that gathers all sensor data in a single transaction per sensor per loop, so that no matter how many times "getSensorData" is called, it doesn't create extra bus transactions. Unfortunately, having USB as an interconnect doesn't help, because of the bus transaction overhead. But enforcing one transaction per sensor per loop will certainly help by keeping the number of USB transactions constant no matter how many sensor calls you make.
                              Hmm, that got me thinking... why wait for the FTC SDK to fix this? We could do this in the library ourselves. Since all the sensors we care about already have wrappers, we could put code in the wrappers to make sure that if we access a sensor multiple times in the same loop, the cached value is returned instead of calling the hardware. So it shall be done... and it is now done. I just did a pass on the library and fixed all the sensors with this optimization.
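                              The wrapper change described here amounts to keying a cache on a loop counter. A minimal sketch (the `Raw` interface is a hypothetical stand-in for the underlying hardware call, not part of any real library):

```java
public class CachedSensor {
    /** Hypothetical stand-in for the real hardware read. */
    public interface Raw { int read(); }

    private final Raw raw;
    private int cached;
    private long cachedLoop = -1;   // loop index the cached value belongs to

    public CachedSensor(Raw raw) { this.raw = raw; }

    /** Hits hardware at most once per loop index; repeat calls return the cache. */
    public int get(long loopIndex) {
        if (loopIndex != cachedLoop) {
            cached = raw.read();
            cachedLoop = loopIndex;
        }
        return cached;
    }
}
```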



                              • #90
                                Originally posted by mikets View Post
                                IMHO, using USB as an interconnect between components is a mistake. I'd rather see all the components connected via a CAN bus (or some other high-speed, low-latency, noise-immune bus), with a CAN controller communicating with the phone via USB. That way, all the data could be bundled into one big transaction to the phone per iteration (i.e. loop).
                                As long as you still have USB back to the robot controller it's all moot. What you are really asking for is CAN between components, back to an embedded controller, and then some protocol to a driver station. And then you have a RoboRIO for FTC.

