Machine vision: Integrating Eva with a smart camera

Posted by Dr Andrea Antonello on Dec 9, 2019 12:53:36 PM

Eva and Datalogic smart camera setup

Whenever people think of automating tasks to increase throughput, remove manual bottlenecks or manage spikes in order volume, the assumption is that the robot will be able to fully replicate the way a human performs the task. The reality is that humans can carry out complex manoeuvres and process information without any additional equipment, whereas a robot cannot. Vision systems are becoming a crucial part of robotics, allowing a robot arm to ‘see’ the task that it’s meant to perform and opening up the possibility of more complex operations.

Any robotic solution needs to be integrated with its environment and this means that manufacturers need to consider how a robot will interact with other elements and technologies in their factory, and how best to present the parts to the robot - sometimes this involves a custom jig, a bowl feeder or, more commonly, a vision system. 

Integrations between smart camera systems and an industrial robot like Eva can be used for applications such as sorting, or in a more complex, multi-robot solution that also includes PLC integration, for quality assurance, testing and inspection. There is a common misconception that building out machine vision integration with a robot is a highly specialist, complex job - but it needn't be. Here's how you can integrate Eva with a Datalogic camera with just 200 lines of code!

The best way to perform machine vision with Eva is to use our REST API. This provides the capability to perform further data processing and decision making, based on the inputs given by the camera.
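If you want to follow along in Python, the snippets below assume a connection to the robot has already been established. Here is a minimal sketch using Automata's Eva Python SDK (evasdk); the IP address and token are placeholders for your own credentials:

import evasdk

# Minimal sketch: connect to Eva over the REST API using the Python SDK
# (host_ip and token are placeholders for your robot's address and API token)
host_ip = '<Eva IP address>'
token = '<API token>'
eva = evasdk.Eva(host_ip, token)

The eva object created here is the same one used by the inverse kinematics wrapper later in this post.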

The application itself is one that I’ve encountered in discussions with customers: how to accurately inspect and sort parts on a production line. For manufacturers dealing with low-volume, high-mix parts, quality control is an essential part of their processes. This demo shows how easily both inspection and sorting can be implemented using a smart camera vision system and Eva.

For this application, I’m using a Datalogic camera, model T47. The setup is quite standard: we have Eva mounted on one of our demo tables (a 750 mm square), with the camera mounted above it at a distance of approximately 600 mm.

On the table, and inside the field of view of the camera, I placed a tray (300x200 mm) with three different types of objects (a capacitor, a relay and a microchip) which we're going to program Eva to sort.

Tray with objects

On the camera side, the exact steps to follow depend on the make and model of the camera you have available. However, they can be generalised as follows:

    1. Camera calibration
    2. Part teaching
    3. Set-up of the camera-API communication 
    4. Defining the robot-camera relative position

Step 1: Camera Calibration

For this pick and place application, I want the camera to provide the robot with the position and rotation of the detected object. That is, I need the object’s cartesian location (x, y) and its planar rotation (θ) in order for the robot to approach it and precisely pick it up. Since we are using a single camera system, we are not capable of discerning the z (depth) location of the objects, which will hence have to be initialised on the same plane (all with the same, constant z).¹

In order for the camera to output the information I want (x, y, θ), it is essential to transform the pixel values into absolute cartesian coordinates. This process is commonly known as calibration. Usually, camera manufacturers provide calibration pages - with dot patterns or checkerboards of known dimensions - for the camera to automatically assess its mounting position/orientation. With the model I’m currently using, this is easily done through the native Calibration function in the VPM Datalogic software: I simply place the page with the dot pattern under the field of view, take a snapshot and let the software do its magic! Step 1 is then complete.

Tip: Don’t move the calibration page just yet, we will need it in the next step! Leave it in the same position it was in when you took the calibration snapshot.

Calibration setup

Step 2: Teaching

I’m using Datalogic’s native tool for object detection. I place my three parts on the tray and I take a snap: I then draw a rectangle around the area of interest and I save the patterns.

Teaching the size and shape of the objects

Step 3: Camera-API communication

Finally, I need to set up the communication between the camera and the API. Depending on the make and model, different options may be available. In this case, I’m using a TCP/IP connection, with the camera acting as a server. On the API side, I use the following lines of code to create the communication socket:

import socket

# Connection to the camera, which acts as a TCP/IP server
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server = <user_input>  # camera IP address (string)
port = <user_input>    # camera TCP port (integer)
sock.connect((server, port))

A ping query from the terminal helps me assess whether the socket has been successfully created. At this point, with the communication working, I need to package the data that I want to communicate to my Python code into a compact string. There are several ways of doing this, but this has worked best for me: I create a single comma-separated string concatenating the key variables of all the patterns.

[Start, Pattern_1, X_1, Y_1, Theta_1, Match_percentage_1, Pattern_2, X_2, Y_2, Theta_2, Match_percentage_2, Pattern_3, X_3, Y_3, Theta_3, Match_percentage_3, End]

Notice how I placed the substrings Start and End at either end of the string: this allows me to verify, upon reading it from the socket, whether a string is corrupted.

For each pattern recognised: Pattern_# represents the name of the pattern (in my case, I chose C, R and M for capacitor, relay and microchip, respectively), X_#, Y_# and Theta_# represent the cartesian position in [mm] and the angle of rotation in [deg] of the recognised pattern, and Match_percentage_# is a score from 0 to 1 representing how well that pattern matches the original snap. In my application, I will apply a 0.8 threshold on the match percentage as the acceptance criterion.
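As an illustration, here is a minimal sketch of how such a string could be read and parsed on the Python side. It assumes the Start/End markers arrive as plain fields and that sock is the socket created above; the function name and the conversion to metres are my own choices, not part of the camera's API:

def read_camera_string(sock, match_threshold=0.8):
    # Read one message from the camera and split it into comma-separated fields
    raw = sock.recv(4096).decode().strip()
    fields = [f.strip() for f in raw.split(',')]
    # Discard corrupted messages by checking the Start/End markers
    if len(fields) < 2 or fields[0] != 'Start' or fields[-1] != 'End':
        return []
    detections = []
    # Each pattern contributes five fields: name, x [mm], y [mm], theta [deg], match score
    for i in range(1, len(fields) - 1, 5):
        name, x, y, theta, match = fields[i:i + 5]
        if float(match) >= match_threshold:  # 0.8 acceptance threshold from above
            detections.append({'pattern': name,
                               'x': float(x) / 1000.0,   # [mm] -> [m]
                               'y': float(y) / 1000.0,   # [mm] -> [m]
                               'theta': float(theta)})   # [deg]
    return detections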

Camera correctly recognises the objects

Step 4: Definition of the robot-camera relative position

Once the serial string is set up, it’s time to take a look at the Python script. There’s one last piece of information the robot needs in order to use the camera data successfully: we need to tell Eva where the camera is positioned. To do this, I first need to know where the origin of the camera frame is: in my case, this is the centre of the top-left dot on my calibration pattern. Once I know where to go, I first backdrive Eva to the ballpark of this position (see image below). Then, in Choreograph, I save this waypoint, open the Gizmo, and double click on the head’s frame. This makes the end-effector frame of reference parallel to the base reference frame, which simplifies the following steps. I finally fine-tune the position with the x, y, z arrows on the Gizmo, moving as close as possible to the centre of the identified dot.

Tip: personally, I like to use a pointed end effector to simplify this centring procedure. Or, like in the figure below, you can use the end-effector itself.

Moving the robot to the origin of the calibration pattern (0, 0)

Once I’m satisfied with this, I query the robot joint angles from the GoTo → Fill Current function in the Choreograph Dashboard. These angles are fundamental for the success of the machine vision application and are copied and pasted into the Python script: the array containing them is named joint_cal_zero in my script.

Tip: make sure you are using the correct angle units: Choreograph will output the values in [deg], while the API uses [rad].
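As a quick illustration, the conversion could look like this (the six values below are placeholders, not my actual calibration angles):

import numpy as np

# Paste the six angles shown by GoTo -> Fill Current here, in [deg] (placeholder values)
joint_cal_zero_deg = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
joint_cal_zero = np.deg2rad(joint_cal_zero_deg).tolist()  # the API expects [rad]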

The final step is to correctly set the pickup and hover heights for the objects. To do this, I define the following parameters (the variable names used in my code are given in parentheses); a short sketch of how they combine follows the list:

    • Thickness of the object to be picked up (obj_heights [m], unsigned float)
    • Offset between the pickup surface and the surface on which the robot is placed, i.e. if the pickup tray is raised on a platform/table (surf_height [m], signed float)
    • Length of the end effector (ee_length [m], unsigned float)
    • Hover height, which defines how high I want to hover above the object right before coming down to pick it up. This is important, for example, for the robot to avoid collisions with the environment and/or with the sides of the tray (hover_height [m], unsigned float)
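Here is a minimal sketch of how these parameters could be combined into the z targets for the hover and pickup waypoints. The numeric values are illustrative placeholders, and representing obj_heights as a dictionary keyed by pattern name is my own assumption rather than something taken verbatim from the full script:

# Illustrative placeholder values, all in [m]
obj_heights = {'C': 0.010, 'R': 0.015, 'M': 0.004}  # part thicknesses
surf_height = 0.0                                   # tray surface offset w.r.t. the robot's base plane
ee_length = 0.150                                   # end-effector length
hover_height = 0.050                                # clearance above the part before descending

pattern = 'C'  # e.g. the capacitor
z_pickup = surf_height + obj_heights[pattern] + ee_length  # end-effector flange height at pickup
z_hover = z_pickup + hover_height                          # flange height while hovering above the part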

At this point, we are ready to test our application. I set the camera to continuous acquisition mode, and I run the Python script. To start with, I’ll test my demo at a slow speed (for example, by setting the default_velocity value in the toolpath metadata to 0.1).
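For reference, a hypothetical fragment showing just that speed setting (the rest of the toolpath is omitted) might look like this:

# Hypothetical fragment: only the speed-related metadata of a toolpath is shown
toolpath_metadata = {
    'default_velocity': 0.1,  # run slowly for the first tests
}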

Tips and Tricks: 

    • When performing a pick and place application, I suggest performing the pickup through a linear trajectory instead of a standard joint-space move: this ensures the gripper will always arrive perpendicular to the object, thus improving grip and repeatability
    • In machine vision applications, lighting is key! Make sure you are in control of the lighting conditions in order to avoid glare, shadowing, and disturbances from external conditions. I highly suggest using an illuminator.
    • For successful Inverse Kinematics (IK) computations, make sure your guess joint angles are chosen correctly: a simple way to do this is to backdrive the robot into the ballpark of the position and read the corresponding angles from the GoTo → Fill Current function in the Dashboard. This will be your guess (remember to convert the [deg] values into [rad], which is the unit that the IK function expects)
    • For this application, I’m using a custom wrapper for the Inverse Kinematics solution: as my collection tray is parallel to the robot’s base, I want to approach my objects always perpendicular to the ground. This means that axis 6’s revolution axis will have to be perpendicular to the XY plane (head-down). Mathematically, the orientation quaternion will be q = [0, 0, 1, 0]. Then, if I want to add a rotation to axis 6 without changing the head-down condition, I can simply input that angle [deg] into the wrapper function, which takes care of the annoying mathematics! Here’s the wrapper:
import math
import numpy as np

def solve_ik_head_down(eva, guess, theta, xyz_absolute):
    # This method solves the inverse kinematics problem for the special case of the
    # end-effector pointing downwards, perpendicular to the ground.
    # guess : the IK guess, a 1x6 array of joint angles in [rad]
    # theta : angular rotation of axis 6 [deg]
    # xyz_absolute : cartesian position, with respect to the robot's origin [m]
    pos = [xyz_absolute[0], xyz_absolute[1], xyz_absolute[2]]  # [m]
    pos_json = {'x': (pos[0]), 'y': (pos[1]), 'z': (pos[2])}  # [m]
    # Head-down orientation q = [0, 0, 1, 0], combined with a rotation of theta about axis 6
    orient_rel = [math.cos(np.deg2rad(theta) / 2), 0, 0, math.sin(np.deg2rad(theta) / 2)]
    orient_abs = quaternion_multiply([0, 0, 1, 0], orient_rel)  # helper defined in the full script
    orient_json = {'w': (orient_abs[0]), 'x': (orient_abs[1]), 'y': (orient_abs[2]), 'z': (orient_abs[3])}
    # Compute IK
    result_ik = eva.calc_inverse_kinematics(guess, pos_json, orient_json)
    success_ik = result_ik['ik']['result']
    joints_ik = result_ik['ik']['joints']
    return success_ik, joints_ik
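The wrapper relies on a quaternion_multiply helper that is defined in the full script. For completeness, a standard Hamilton product in the same [w, x, y, z] convention would be a drop-in sketch:

def quaternion_multiply(q1, q2):
    # Hamilton product of two quaternions given as [w, x, y, z] lists
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return [w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2]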

    • In order to correctly process the coordinates sent from the camera, I need to be aware of the relative orientation of the robot’s and the camera’s frames of reference. Eva’s frame of reference can be found in the manual. The camera’s frame of reference depends on the make and model: however, assuming that the XY planes of Eva and the camera are parallel, we can use this code snippet to correctly rotate the camera coordinates into Eva’s frame of reference (this is nothing more than the classic 2D rotation matrix). If the two XY planes are not parallel, a 3D rotation matrix will have to be used.

import numpy as np

# Compute the relative object position in Eva's frame:
# we need to know Eva's frame rotation with respect to the camera frame.
# Convention: start from the camera frame and rotate by ang_cam [deg] to get to Eva's frame.
ang_cam = 180  # [deg]

# x_obj_rel_cam, y_obj_rel_cam: object position in the camera frame, parsed from the camera string [m]
x_obj_rel = np.cos(np.deg2rad(ang_cam)) * x_obj_rel_cam + np.sin(np.deg2rad(ang_cam)) * y_obj_rel_cam  # [m]
y_obj_rel = -np.sin(np.deg2rad(ang_cam)) * x_obj_rel_cam + np.cos(np.deg2rad(ang_cam)) * y_obj_rel_cam  # [m]
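To give an idea of how the pieces fit together, here is a hedged sketch of moving to a hover position above a detected part. The names cal_origin_xyz, theta_obj and z_hover are mine: cal_origin_xyz is assumed to be the cartesian position of the calibration pattern's origin in Eva's frame, theta_obj is the rotation reported by the camera, and the offsets come from the rotation snippet above:

# Sketch: turn the rotated camera offsets into an absolute target and move there
x_target = cal_origin_xyz[0] + x_obj_rel  # [m]
y_target = cal_origin_xyz[1] + y_obj_rel  # [m]

success_ik, joints_hover = solve_ik_head_down(eva, joint_cal_zero, theta_obj,
                                              [x_target, y_target, z_hover])
if success_ik == 'success':  # assuming the API reports 'success' for a solved IK
    with eva.lock():
        eva.control_go_to(joints_hover)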

This is what the complete application looks like: 

 

 

Machine vision & sorting demo from Automata on Vimeo.

I hope that this walkthrough has demonstrated that integrations between Eva and a vision system do not need to be complicated or even require extensive programming knowledge. If you’re interested in learning more about how we achieved this and whether a similar set-up could work for you, simply get in touch and one of our applications team will be happy to help. 

You can take a look at the full code for this application by filling out the form below.

 

¹ For more complex applications, in which the objects are stacked on top of each other, the sensor will have to be a 3D camera or a stereo vision system, which is beyond the scope of this application.

Topics: Inspection, Machine Vision, API, Video, Sorting, Datalogic