Sample op mode and video on navigating to a SKYSTONE during the autonomous period

  • #1

    I have created a sample op mode and a video that use TensorFlow to navigate to a SKYSTONE.

    You can download the sample op mode by clicking here: https://tinyurl.com/SeekSkystoneBLK
    Here's a link to a PDF file showing the blocks of this op mode that you can display or print:
    https://tinyurl.com/SeekSkystonePrint

  • #2
    Good Evening Bruce,
    When I try to upload the program into my Blocks environment, I receive this error: "Could not generate code for blocks. TypeError: Cannot read property 'isConnected' of null." I'm not sure if it is my Blocks program. Please advise, and thank you for all your work on this.

    • #3
      Have you installed recent versions of the Driver Station and Robot Controller apps? There are new versions available on the Google Play Store. The RC app is labeled "(Beta)" and is Version 5.2.

      • #4
        Thanks, that helped. Now I just need to figure out why it just backs up and does not recognize the Skystone.

        • #5
          One thing to double-check is that your RC phone is in Auto-rotate mode if it is mounted horizontally on your robot.

          • #6
            Love the videos, but -ugh- the tinyurl.com links have timed out. Can you put in real URLs that we can copy-paste?

            • #7
              https://tinyurl.com/SeekSkystoneBLK
              SeekSkyStone.BLK in Google Drive as https://drive.google.com/open?id=1k1...rQUcPEDpUbLxkT


              https://tinyurl.com/SeekSkystonePrint
              SeekSkyStone.PDF in Google Drive as https://drive.google.com/open?id=1DF...3AYGRdgkMrjpPW

              • #8
                Hi folks,

                Some folks mentioned that their RCs are identifying multiple elements as a single Skystone. If you are experiencing this issue, one thing you can do to try to resolve it is to decrease the minimum confidence threshold for the TensorFlow object detection model. I believe that in the example op modes the default value is set to 0.8 (an 80% minimum confidence level). However, decreasing it to a lower value (I tested 60%) seems to increase the likelihood that TensorFlow will also identify the adjacent game elements, making it less likely to "lump" multiple elements into a single giant element (see the sketch below).

                Details can be found in the following Reddit post:

                https://www.reddit.com/r/FTC/comment..._makes_it_way/
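
                For Java teams, here is a minimal sketch of the same adjustment, trimmed down from the SDK's ConceptTensorFlowObjectDetection sample. The class name and the op mode annotation name are mine; the parameter and asset names follow the sample, but verify them against your SDK version.

                Code:
                import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
                import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
                import org.firstinspires.ftc.robotcore.external.ClassFactory;
                import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
                import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

                @Autonomous(name = "TFOD Lower Confidence Sketch")
                public class TfodLowerConfidenceSketch extends LinearOpMode {
                    private static final String VUFORIA_KEY = " -- YOUR KEY HERE -- ";
                    private VuforiaLocalizer vuforia;
                    private TFObjectDetector tfod;

                    @Override
                    public void runOpMode() {
                        // Vuforia supplies the camera frames that TensorFlow consumes.
                        VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
                        parameters.vuforiaLicenseKey = VUFORIA_KEY;
                        parameters.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;
                        vuforia = ClassFactory.getInstance().createVuforia(parameters);

                        int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
                        TFObjectDetector.Parameters tfodParameters =
                                new TFObjectDetector.Parameters(tfodMonitorViewId);

                        // The sample op modes default to 0.8 (80%); lowering the minimum
                        // confidence to 0.6 makes TensorFlow more willing to report adjacent
                        // stones as separate detections instead of one oversized element.
                        tfodParameters.minimumConfidence = 0.6;

                        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
                        tfod.loadModelFromAsset("Skystone.tflite", "Stone", "Skystone");

                        waitForStart();
                        tfod.activate();
                        // ... read recognitions and drive toward the Skystone ...
                        tfod.shutdown();
                    }
                }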

                • #9
                  Hi Tom,

                  Thanks for the info, link, and video.

                  The video shows stone/skystone detection with the camera quite close to the stones. Could you run a test and do some refinement with the camera about 1-2 floor tiles away from the stones where most phones will be at the start of autonomous? Last season, many robots made autonomous sampling field decisions based on what they could see from their starting location and I imagine many robots might like to do something similar again this season. It would be great if TensorFlow could correctly identify and separate stones and skystones even when 3-4 of the stones in the quarry are all visible to the camera in a horizontal row.

                  I don't know how possible this is, but it would be great if the TensorFlow model could be trained with some of the training images taken with the stones/skystones as they appear in their row in the quarry from varying distances (and not just as "loose" separated stones), as that would probably dramatically improve the model's ability to resolve different stones when they're all in a contiguous row.

                  Thanks as always for your efforts and consideration.

                  • #10
                    Yeah, hurray, the drive.google.com links work, in time for today's practice. Thanks for that!

                    • #11
                      Originally posted by Cheer4FTC View Post
                      ...It would be great if TensorFlow could correctly identify and separate stones and skystones even when 3-4 of the stones in the quarry are all visible to the camera in a horizontal row.

                      Hi Cheer4FTC,

                      Thanks for the post. Unfortunately, I don't know if we have the time during the season to do any additional model training. Running the actual training is not difficult, since we used Google TPU clusters, which run the training steps very quickly. The time-consuming part is generating and selecting the training images. We (an intern and I) spent several weeks generating different training data sets and developing models based on what we thought were ideal training sets. However, in some cases, adding additional types of images in an attempt to increase detection accuracy resulted in worse results. Increasing the number of training steps did not help for these data sets - the error did not converge to an acceptable value.

                      It was an iterative process for us to get a more reliable inference model, and it took us a non-trivial amount of time to generate, review, and process the training data sets.

                      If teams are interested in generating their own updated models, the steps are outlined in this tutorial. The most difficult part is installing the necessary components to run TensorFlow. However, I did some recent testing, and it seems like this installation process is much easier now - many of the issues with broken dependencies seem to have been corrected.

                      Originally, I did all of the training on a Linux machine (Ubuntu 18.04 running under the Windows Subsystem for Linux on Windows 10), because at the time it was more expedient to do everything in a Linux console. However, I recently installed Google's TensorFlow and the TensorFlow Object Detection API directly on my Windows laptop and got them to run. I haven't tested the Windows build environment with the Google Cloud services integration, but I think it could be done relatively easily.

                      Google's ftc-object-detection tool can be used to help generate the training and validation data sets.

                      For this season's game, I believe that with the default TensorFlow model, and also with the Vuforia software, the robot will need to be a little closer to the Stones/Skystones to distinguish them more accurately. This might have to factor into your plans when developing a strategy for autonomous (a rough sketch of such a detection loop follows at the end of this post).

                      I hope this helps. Please let me know if you have any additional questions.

                      Tom
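
                      For reference, here is a rough Java sketch of the kind of detection loop this implies, using the SDK's Recognition API. The method and label names come from the SDK samples; the width cutoff for rejecting "lumped" detections is an illustrative assumption you would tune on your own field.

                      Code:
                      // Fragment only: assumes "tfod" was initialized and activated as in
                      // the SDK samples, and that this runs inside an op mode loop.
                      import java.util.List;
                      import org.firstinspires.ftc.robotcore.external.tfod.Recognition;

                      List<Recognition> recognitions = tfod.getUpdatedRecognitions();
                      if (recognitions != null) {
                          for (Recognition r : recognitions) {
                              // An implausibly wide box often means several stones were
                              // detected as one element; 300 px is an arbitrary cutoff.
                              float width = r.getRight() - r.getLeft();
                              if (width > 300) continue;

                              if ("Skystone".equals(r.getLabel())) {
                                  telemetry.addData("Skystone", "left=%.0f right=%.0f",
                                          r.getLeft(), r.getRight());
                              }
                          }
                          telemetry.update();
                      }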

                      • #12
                        Hi Tom,

                        Could you explain how to do the training in a Windows build environment with the Google Cloud services integration?
