Vuforia Target Detection


  • jrasor2017
    replied
    I tried the 30 minute TPU tutorial https://medium.com/tensorflow/traini...s-b78971cf1193 on Google Cloud Platform. No luck so far. There was a show-stopper error in a switch for the detection job. With that error fixed, the job runs, but fails after 5 minutes with "Please provide a TPU Name to connect to". The evaluation job runs, but fails after 5 minutes with "Expected string but found: 'input_path'."
    Tom Eng is helping me through the tutorial https://github.com/google/ftc-object...aster/training. See issues #14 and #15 over there. He suggests invoking model_main.py; that is not mentioned anywhere in his Google tutorial. model_main.py is the local analog of ml_engine in the 30 minute tutorial, which has it as a switch in the evaluation job.
    I did a local adaptation of the 30 minute tutorial, attempting to detect a single object: a poker chip. In 3 hours it produced a detector model somewhat better than a wild guess: it sees the poker chip every time on a variety of backgrounds, but sees a lot of other things as poker chips with less confidence. Longer training and tighter minimum confidence should improve this partial success.
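    For anyone following along, a local training run via model_main.py generally looks like the sketch below. Paths, directory names, and the step count are hypothetical; the command is run from the models/research directory of a tensorflow/models checkout with the Object Detection API installed.

    ```shell
    # Hypothetical paths -- adjust to your own checkout and data.
    PIPELINE_CONFIG=train_data/pipeline.config
    MODEL_DIR=training_output

    # Runs training (and periodic evaluation) locally instead of on Cloud ML Engine.
    python3 object_detection/model_main.py \
      --pipeline_config_path="${PIPELINE_CONFIG}" \
      --model_dir="${MODEL_DIR}" \
      --num_train_steps=50000 \
      --alsologtostderr
    ```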



  • jrasor2017
    replied
    I'm going to do the 30 minute TPU tutorial https://medium.com/tensorflow/traini...s-b78971cf1193 unmodified, on Google Cloud Platform.

    By the way, Tensorflow for Poets https://codelabs.developers.google.c...low-for-poets/ works very well on the poker chip and thumb drives, but the SkyStone code can't use the resulting detect.tflite.



  • jrasor2017
    replied
    Summarizing Tom's suggestions:

    - follow the [30 minute TPU] tutorial https://medium.com/tensorflow/traini...s-b78971cf1193 and modify the commands to run locally
    - use TensorBoard
    - adjust the confidence threshold to maybe 0.4
    - invoke model_main.py -- that's the big one
    - specify the location of the training records in the pipeline.config file

    Did I miss anything?
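    Concretely, the TensorBoard suggestion just means pointing it at the directory model_main.py writes its checkpoints and event files to (directory name hypothetical), then opening http://localhost:6006 in a browser to watch the loss curves:

    ```shell
    # Monitor training progress; "training_output" is whatever --model_dir
    # was passed to model_main.py.
    tensorboard --logdir=training_output
    ```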



  • Tom Eng
    replied
    Originally posted by jrasor2017
    Can we just get over this: How does one tell python or bazel to actually use the 265 poker chip records generated by python3 convert_labels_to_records.py train_data -n 8 --eval for training?
    jrasor2017 - this is explained in the TPU tutorial, but you specify the location of the training records in the pipeline.config file.
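    For reference, the relevant part of a pipeline.config typically looks like the fragment below. The file names are hypothetical (check what convert_labels_to_records.py actually wrote; the `-n 8` flag suggests 8 shards):

    ```
    train_input_reader: {
      tf_record_input_reader {
        input_path: "train_data/train.record-?????-of-00008"
      }
      label_map_path: "train_data/label_map.pbtxt"
    }
    eval_input_reader: {
      tf_record_input_reader {
        input_path: "train_data/eval.record-?????-of-00008"
      }
      label_map_path: "train_data/label_map.pbtxt"
    }
    ```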



  • Tom Eng
    replied
    Also, the README file on the ftc-object-detection portion of the repository explains how to use the tools that are included in that repo to generate the training records more efficiently. It references the Tensorflow Object Detection API documentation for the model training information.



  • Tom Eng
    replied
    Originally posted by jrasor2017
    Thanks, Tom!

    I will try the 30 minute TPU tutorial locally as you suggested, and report. Last time I tried it, it failed, expecting to find a non-existing directory in my local data bucket.

    Two quick comments.

    1. You, and others who have graciously lent assistance on this issue, reference that 30 minute TPU tutorial every time. Perhaps the team should dump https://github.com/google/ftc-object...ning/README.md. Apparently, nobody on your end has any confidence in it.
    jrasor2017 - I disagree. The README file explains where to find the object detection documentation including the TPU tutorial. It was written last year so some of the items that it references (such as the tensorflow software) might have changed.

    Originally posted by jrasor2017

    2. How did the team actually make the model https://github.com/FIRST-Tech-Challe...rc/main/assets/RoverRuckus.tflite? This is the one distributed since ftc_app v 4.0. It could not have been made with https://github.com/google/ftc-object...aster/training; that had show-stopper typos as published in version 4.0, only recently fixed. If the 30 minute cloud tutorial was used, that should replace the README.md in https://github.com/google/ftc-object...aster/training.
    That "typo" is not incorrect if you were to use an older version of the Tensorflow software. The RoverRuckus training was done last summer by a Google employee in Mountain View.

    If you do the TPU tutorial (you can modify it to run locally on your desktop; to do that you need to invoke the model_main.py script), it should work with the current version of the TensorFlow software. I used the TPU tutorial last month to develop a custom inference model, both locally (on a workstation) and using a TPU cluster (orders of magnitude faster).



  • jrasor2017
    replied
    Can we just get over this: How does one tell python or bazel to actually use the 265 poker chip records generated by python3 convert_labels_to_records.py train_data -n 8 --eval for training?



  • jrasor2017
    replied

    Thanks, Tom!

    I will try the 30 minute TPU tutorial locally as you suggested, and report. Last time I tried it, it failed, expecting to find a non-existing directory in my local data bucket.

    Two quick comments.

    1. You, and others who have graciously lent assistance on this issue, reference that 30 minute TPU tutorial every time. Perhaps the team should dump https://github.com/google/ftc-object...ning/README.md. Apparently, nobody on your end has any confidence in it.
    2. How did the team actually make the model https://github.com/FIRST-Tech-Challe...rc/main/assets/RoverRuckus.tflite? This is the one distributed since ftc_app v 4.0. It could not have been made with https://github.com/google/ftc-object...aster/training; that had show-stopper typos as published in version 4.0, only recently fixed. If the 30 minute cloud tutorial was used, that should replace the README.md in https://github.com/google/ftc-object...aster/training.



  • Tom Eng
    replied
    Originally posted by jrasor2017
    .. is it possible that the model is detecting the new objects (poker chips and thumb drives) but at a low confidence level? I did not try that, nor will I. I do not want to detect Rover Ruckus stuff in our SkyStone code, at any confidence level. Both the supplied model and the ones I train using the tutorials do that at high confidence level, and are utterly blind to other objects. I cannot get the recommended scripts to do anything meaningful with my videos, trackers, labels or records.
    jrasor,

    Once you have created a new custom inference graph and exported it to the .tflite format, it's worthwhile to do this test. I'm just suggesting that when you initialize the TensorFlow library, you set the detection threshold to a lower value (in the FTC SDK software, the default threshold last year was 0.4). If you set the detection threshold too high, the TensorFlow object detection software might not indicate any detections, even though it does recognize your poker chip and thumb drive objects in its field of view.

    By turning down the detection threshold, you are being less selective, and the TensorFlow library will be more likely to indicate that it recognizes an object.

    As I was developing a custom inference model, I initially had my confidence threshold set to 0.4 when I did my testing of the initial versions of the custom model. As the model's accuracy increased, I would then increase the confidence threshold to a higher value, after I confirmed that the model was working.
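    To make the effect concrete, here is a toy sketch (not the actual FTC SDK code; the function and data are made up) of how a confidence threshold gates which raw detections get reported:

    ```python
    # Toy illustration: a detector emits (label, confidence) pairs, and the
    # threshold decides which of them are surfaced to the caller.

    def filter_detections(detections, min_confidence):
        """Keep only (label, confidence) pairs at or above the threshold."""
        return [d for d in detections if d[1] >= min_confidence]

    # Hypothetical raw model output for one camera frame.
    raw = [("poker_chip", 0.72), ("poker_chip", 0.35), ("thumb_drive", 0.55)]

    print(filter_detections(raw, 0.7))  # [('poker_chip', 0.72)]
    print(filter_detections(raw, 0.4))  # two detections survive
    ```

    A half-trained model may well be recognizing objects at, say, 0.5 confidence; with the threshold at 0.7 those detections are silently dropped, which looks identical to the model being blind.
    
    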

    Tom



  • Tom Eng
    replied
    Hi jrasor2017

    I believe the issue that you are having is that you are exporting the old (gold and silver minerals) inference model without first running the training process to create the new custom inference graph. Once you've run this training process (and it will take a long time if you are using a workstation, even if you have a CUDA-enabled GPU and run the tensorflow-GPU code) you can then run the export_tflite... python script to export it to a format that can then be converted to a .tflite file.
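    As a rough sketch of that export flow (script names are from the TF1 Object Detection API documentation; checkpoint number and paths are hypothetical):

    ```shell
    # 1. Export the trained checkpoint to a TFLite-compatible frozen graph.
    python3 object_detection/export_tflite_ssd_graph.py \
      --pipeline_config_path=train_data/pipeline.config \
      --trained_checkpoint_prefix=training_output/model.ckpt-50000 \
      --output_directory=exported \
      --add_postprocessing_op=true

    # 2. Convert the frozen graph to a .tflite file (quantized SSD example).
    tflite_convert \
      --graph_def_file=exported/tflite_graph.pb \
      --output_file=exported/detect.tflite \
      --input_shapes=1,300,300,3 \
      --input_arrays=normalized_input_image_tensor \
      --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
      --inference_type=QUANTIZED_UINT8 \
      --mean_values=128 \
      --std_dev_values=128 \
      --allow_custom_ops
    ```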

    I provided details in my response for the issue that you opened on the ftc-object-detection repository.

    Tom



  • jrasor2017
    replied
    .. is it possible that the model is detecting the new objects (poker chips and thumb drives) but at a low confidence level? I did not try that, nor will I. I do not want to detect Rover Ruckus stuff in our SkyStone code, at any confidence level. Both the supplied model and the ones I train using the tutorials do that at high confidence level, and are utterly blind to other objects. I cannot get the recommended scripts to do anything meaningful with my videos, trackers, labels or records.



  • jrasor2017
    replied
    I'm a happy camper if I can just get over this: How does one tell python or bazel to use the 265 records generated by python3 convert_labels_to_records.py train_data -n 8 --eval ?





  • jrasor2017
    replied
    ... did you try adjusting the confidence threshold? No. The models I generated detect Gold and Silver very well. But I cannot get them trained on anything else, no matter what records I generate. And yes, they reject poker chips and thumb drives as neither Gold nor Silver.



  • jrasor2017
    replied
    For how many training steps did you run your model? I cannot tell from https://github.com/google/ftc-object...aster/training. The options for the script $MODEL_RESEARCH_DIR/object_detection/export_tflite_ssd_graph.py do not obviously specify the number of training steps, and neither does the bazel invocation on that page. Is it --mean_values=128 or --std_values=128?

