Demo: Object Classification

Description

A common use of robots in manufacturing is operational inspection. The robot points a camera at the part it is inspecting.

 

Machine learning determines whether the part meets quality specifications. After data capture and training, the camera inspects parts for a single feature.

 

This demo shows how to automate a quality inspection application.

 

The objective of this demo is to show a two-state classifier: the recognition has only two possible outcomes.

Step by step
  1. Move the robot into the position from which the object recognition will be performed.

  2. As with the camera calibration, start the ROS environment on the Compute module by running run_ark.sh.

  3. On the Compute module, open the GUI.

    The blue bounding box (configurable in ros/config/config.yaml) is the region of interest.

  4. Start recording images for state_0.

  5. Move the object you want to recognize within the bounding box of the camera view.

  6. Take at least 20-30 images, varying the object position and, if desired, the object orientation.

    Images are stored in ros/data/datasets/classification_active/raw

  7. Record images of a second object as state_1.

  8. Tap Train model. This process can take several minutes.

You can follow the progress in the Terminal window on the Compute module.

Training is complete when the message "onnx conversion completed" appears in the Terminal and the model is written as a file in ros/data/models/classification_active.
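Before training, it can help to confirm that enough images were captured for each state. Below is a minimal Python sketch; the per-state subfolder layout and the file extensions are assumptions, so adjust them to what you actually see under the dataset directory:

```python
from pathlib import Path

def count_images(dataset_dir, exts=(".png", ".jpg", ".jpeg")):
    """Count image files in each state subfolder of the dataset (layout assumed)."""
    counts = {}
    for state_dir in sorted(Path(dataset_dir).iterdir()):
        if state_dir.is_dir():
            counts[state_dir.name] = sum(
                1 for f in state_dir.iterdir() if f.suffix.lower() in exts
            )
    return counts

# Flag states that fall short of the recommended 20 images
dataset = Path("ros/data/datasets/classification_active/raw")
if dataset.exists():
    for state, n in count_images(dataset).items():
        if n < 20:
            print(f"{state}: only {n} images - capture more before training")
```

An unbalanced dataset (many more images of one state than the other) can bias the classifier, so aim for roughly equal counts.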

To use the model
  1. In the UI, tap Load model.

    This loads the active model from ros/data/models/classification_active.

  2. Tap Classify to test.

  3. The Terminal window outputs the class (either state_0 or state_1) and the probability (recognition certainty).
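If you want to consume the classification result programmatically rather than reading the Terminal by eye, a small parser can help. This is only a sketch: the exact format of the log line is an assumption, so adapt the pattern to the output you actually see:

```python
import re

def parse_classification(line):
    """Extract the class label and probability from a Terminal output line.

    The log format below (e.g. "class: state_0, probability: 0.97") is an
    assumption; adjust the pattern to match what run_ark.sh actually prints.
    """
    m = re.search(r"(state_[01])\D+([01](?:\.\d+)?)", line)
    if not m:
        return None
    return m.group(1), float(m.group(2))

print(parse_classification("class: state_0, probability: 0.97"))  # -> ('state_0', 0.97)
```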

Example of using recognition results

Specific robot positions are stored in this program. Before executing the program, check that the robot can move freely to each of the stored waypoints and that doing so poses no risk.

  1. Set program speed to 10%.
  2. Select wp_recognize and tap Move Here.
  3. Verify that there are no obstructions.
  4. Repeat these steps for state_0 and state_1.
  5. If necessary, use Freedrive to move the robot to a new position and save it as detect_wp.

The AI Accelerator SDK includes an example robot program that uses the recognition results.

  1. On the robot, open the ark_example_classify program, installed during setup.

  2. Run run_ark.sh on the Compute module.

  3. Load the classification model.

  4. The robot program uses three waypoints: wp_recognize, state_0, and state_1.

    The wp_recognize waypoint is where the camera looks at the object for recognition. The program then conditionally moves the robot to either the state_0 waypoint or the state_1 waypoint, depending on the recognition result.

The value of the variable detected_class is assigned by the function ark_classification_retrieve(). You can see the details of the ROS communication in the URScript code.
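The branching the example program performs can be sketched in Python as follows. This is an illustration of the logic, not the actual URScript; the confidence threshold is a hypothetical addition for rejecting uncertain results and is not part of the shipped ark_example_classify program:

```python
def choose_waypoint(detected_class, probability, threshold=0.8):
    """Pick the waypoint to move to from the recognition result.

    Mirrors the conditional move in the example: the target waypoints
    are named after the classes (state_0, state_1). The threshold is
    an illustrative addition, not part of the original program.
    """
    if detected_class not in ("state_0", "state_1"):
        raise ValueError(f"unexpected class: {detected_class!r}")
    if probability < threshold:
        return "wp_recognize"  # low confidence: stay put and re-classify
    return detected_class

print(choose_waypoint("state_0", 0.97))  # -> state_0
```

Rejecting low-confidence results like this keeps the robot from acting on an ambiguous image; tune the threshold against the probabilities you observe in the Terminal.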