
Delivering on e-commerce

In recent years a growing number of companies have begun to use vision technology in automated warehouses and distribution centres, with devices like automated guided vehicles and techniques such as optical character recognition on the rise. Where are the key applications of vision technology, and what innovations can be expected in the coming years?

One company involved in the automated warehouse sector is Cognex, which makes a wide range of guidance, inspection, measurement and identification technology. As Paul Eyre, director of sales for logistics and ID mobile at Cognex Europe, explained, the company’s products begin by acquiring an image of a customer’s product or part. Software tools are then applied to analyse the image and provide an output, which can be anything from co-ordinate data sent to a robot or a decoded barcode string sent to a PLC, to an electrical output triggering a reject mechanism, among many other output types delivered over standard industrial communication protocols.
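Cognex’s own tooling is proprietary, but the acquire-analyse-output pattern Eyre describes can be sketched with open-source libraries. The following minimal Python example uses OpenCV and pyzbar as stand-ins, with a plain TCP socket standing in for an industrial protocol; the file name, host address and port are purely illustrative assumptions.

```python
# Minimal acquire -> analyse -> output sketch using open-source libraries
# (OpenCV, pyzbar) - an illustration of the general pattern, not Cognex's API.
import socket

import cv2
from pyzbar.pyzbar import decode


def read_and_forward(image_path: str, plc_host: str, plc_port: int) -> None:
    # Acquire: load an image (in production this would come from the camera).
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    # Analyse: locate and decode any barcodes in the frame.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    results = decode(gray)

    if not results:
        print("NO READ - trigger reject mechanism")  # e.g. set a digital output
        return

    # Output: forward each decoded data string to a downstream controller.
    # A raw TCP socket stands in here for an industrial protocol.
    with socket.create_connection((plc_host, plc_port), timeout=2) as conn:
        for result in results:
            conn.sendall(result.data + b"\n")
            print(f"Sent {result.type}: {result.data.decode()}")


if __name__ == "__main__":
    read_and_forward("parcel_label.png", "192.168.0.10", 2000)
```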

Eyre pointed out that many post-manufacturing, warehousing and distribution processes are still quite labour-intensive, but admitted that the rapid evolution of the e-commerce industry has changed consumer behaviour.

‘Consumers require products delivered to a location of their choice in a short timeframe, and the ability to service this demand is paramount. This change in consumer behaviour, in addition to the difficulty in finding and retaining manual labour, has meant that many companies have begun to invest in automation, which is allowing companies to realise major gains in efficiency and throughput, and to better service the needs of a more demanding customer base,’ he said.

According to Eyre, Cognex is playing a major part in the drive to automate warehousing and distribution processes. He said Cognex’s image-based identification technology is helping warehouses and distribution centres improve their ability to track products from goods-in to shipping, and even right to the customer’s door. Eyre also revealed that industry leaders such as Edeka, in the grocery industry, and Zalando, in the e-commerce sector, use Cognex’s technology to identify products even when labels are badly damaged or under plastic film, a task he feels laser-based technology struggles with.

‘The implementation of Cognex’s image-based DataMan products over traditional laser-based products allowed Edeka to increase read rates of incoming pallets by up to 8 per cent, reducing pallet re-work and thereby providing a major efficiency increase, while also ensuring all supplier labels are compliant with the GS1 standard,’ he said.

‘Zalando’s retrofit of older laser-based technology with the latest image-based technology resulted in an increase in read rates of three per cent on their sorting system at their Erfurt, Germany, facility,’ Eyre added.

In addition, Ocado, a major online grocery company based in the UK, uses Cognex’s technology to maximise throughput at its warehouses in semi-automated processes.

Beyond the simple identification of products, Eyre said that Cognex is also using other image-based technologies, such as 2D and 3D machine vision, as well as deep learning, to carry out a number of distinct tasks: calculating product dimensions, checking boxes and cartons for damage, determining whether boxes or totes are filled to capacity, assessing barcode quality, automating robotic picking, and classifying products into categories for sorting.

As well as enabling companies to automate processes to the point where little or no operator intervention is required, and providing excellent read rates on barcodes and other symbols, Eyre highlighted a number of additional benefits of image-based technology of the type supplied by Cognex. To begin with, he explained that with camera-based technology the image remains available when a product fails a specific test. Image feedback also provides data about the process, which can be used to drive process improvements. He also pointed out that Cognex smart cameras are solid-state devices with no moving parts, and contain their own on-board processors, meaning no PCs are required.

‘The future will see more pixels and more processing power within our smart cameras, and the introduction of 3D and deep-learning platforms to address a myriad of applications that are currently labour intensive,’ he added.

Slave to the algorithm

Another vision technology commonly used in the logistics sector is optical character recognition (OCR), a process by which software converts images of human-readable text into machine-encoded characters that can be stored, searched and interpreted. One company focused on the development of such technology is Omron, which offers advanced optical character recognition functionality in its machine vision product line.

‘Allowing user control of customisable parameters, [Omron’s] IntelliText OCR can be adjusted to recognise characters regardless of marking or printing method, including low contrast text on poor backgrounds,’ said Nico Hooiveld, EMEA market manager at Omron Europe.

According to Hooiveld, the tool can also read difficult characters on parts and products in automated identification, tracking and inspection applications – and is capable of reading text printed by various methods, including inkjet, drop on demand (DOD) and direct part marking. Moreover, he observed that an integrated multi-neural network allows the tool to train on character variations and store these in a font library for increased OCR speed as the library grows. An advanced character segmentation capability also allows the software to parse characters, regardless of their uniformity or the precision of the print region, a function that Hooiveld pointed out is useful when print consistency, label placement, or text location is subject to variation. To aid segmentation in difficult reading environments, IntelliText OCR also offers image pre-processing, enabling the software to run filters on an image taken by a machine vision camera to produce what Hooiveld described as the cleanest image possible for OCR.

‘Unique to IntelliText OCR is the software’s image binarisation process, which converts the greyscale image taken by the camera to a binary image. The binary image allows the user to see the image features that the software is able to recognise as characters, and gives users the ability to set tolerances that determine how much of the image is in view. This allows the most difficult text to be adjusted for and read with ease,’ he said.
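IntelliText OCR’s internals are proprietary, but the general idea of pre-filtering and binarising an image before character recognition can be illustrated with open-source tools. The sketch below is a hypothetical Python example using OpenCV and pytesseract rather than Omron’s software; the file name, filter and threshold choices are assumptions for illustration only.

```python
# Hypothetical pre-processing + binarisation + OCR sketch using open-source
# libraries (OpenCV, pytesseract) - an illustration of the general technique,
# not Omron's IntelliText OCR implementation.
import cv2
import pytesseract


def binarise_and_read(image_path: str) -> str:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # Pre-processing filter: a light blur to suppress background noise.
    filtered = cv2.GaussianBlur(gray, (3, 3), 0)

    # Binarisation: Otsu's method picks a global threshold automatically,
    # separating dark characters from a brighter, uneven background.
    _, binary = cv2.threshold(
        filtered, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )

    # Character recognition runs on the cleaned-up binary image.
    return pytesseract.image_to_string(binary)


if __name__ == "__main__":
    print(binarise_and_read("printed_date_code.png"))
```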

In automated warehouse and logistics applications, Hooiveld observed that 2D barcodes are predominantly used, largely thanks to their more reliable read performance. In terms of logistics and supply chain applications more broadly, he said the Omron OCR technology has been used successfully at Del Monte, where the company implemented a vision inspection system to read OCR text and barcodes on canned fruits and vegetables, ensuring that the product code printed on the top of the can matched both the contents inside the can and the label.

Looking at automated warehousing in a broad sense, Hooiveld also singled out human-machine collaboration, aided by imaging and vision technology, as a major trend. Omron has recently launched its TM series of collaborative robots for manufacturing environments, where humans and machines work together.

The TM series will automate applications such as picking, packing and screw-driving. Omron has also released a mobile-compatible model, which integrates into its LD series autonomous mobile robot. This means users can automate more complex tasks, such as pick-and-place onto a tray or container.

Self-driving vehicles

Another company heavily engaged in the vision and automation sector is Vecna Robotics, which manufactures a fleet of self-driving vehicles, including tuggers, pallet jacks and robotic conveyors with payloads ranging from 20kg up to 4,500kg. A key feature of each device is the onboard Vecna Autonomy Stack navigation technology. As Daniel Theobald, chief innovation officer at Vecna Robotics, explained, the company’s robots employ a multi-modal approach to sensing, and use data from stereovision, time-of-flight cameras, structured light cameras, lidar, radar, ultrasonic and infrared sensors, GPS, ultra-wideband, Bluetooth, Wi-Fi, accelerometers, gyroscopes, a compass, pressure sensors and current sensors.

‘There is no one perfect sensor, and choosing the appropriate sensor suite depends on the specific operating environment. Safety is always the first priority and this technology analyses billions of data points per second, enabling robots to react in real time to their surroundings,’ he said.

An autonomous tugger with a load of packages rounds a corner at a FedEx distribution centre. Credit: Vecna Robotics

In the warehouse environment, Theobald said that Vecna robots can also re-route around obstacles, identify static and moving objects, read barcodes to identify specific items, and adjust speed based on environmental conditions. Data from the various sensors employed in Vecna’s devices are calibrated together and then filtered to make sure that all the sensor data is consistent with the robot’s believed current location. If there are outliers or conflicting readings that the robot cannot resolve on its own, Theobald explained, the system safely stops and asks for help. At present, Theobald said that Vecna’s self-driving vehicles are active in a wide array of global distribution centres and regional third-party logistics sites, as well as manufacturing facilities and warehouses.
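Vecna’s fusion and fault-handling logic is proprietary, but the basic consistency check Theobald describes can be sketched in a few lines. The hypothetical Python example below compares position estimates from several sensors against their consensus and flags a stop-and-ask-for-help condition when the disagreement is too large; the sensor names, thresholds and readings are illustrative assumptions, not Vecna’s actual Autonomy Stack.

```python
# Hypothetical consistency check across redundant localisation estimates,
# illustrating the stop-and-ask-for-help behaviour described above.
# Sensor names, thresholds and readings are made up for illustration.
import numpy as np

MAX_RESIDUAL_M = 0.25       # a sensor may disagree with consensus by 25cm
MAX_OUTLIER_FRACTION = 0.4  # beyond this, the robot cannot trust its pose


def check_pose_consistency(estimates: dict):
    """Return a fused (x, y) pose, or None if the robot should stop and ask for help."""
    positions = np.stack(list(estimates.values()))   # shape (n_sensors, 2)
    consensus = np.median(positions, axis=0)          # robust central estimate
    residuals = np.linalg.norm(positions - consensus, axis=1)

    outliers = [name for name, r in zip(estimates, residuals) if r > MAX_RESIDUAL_M]
    if len(outliers) / len(estimates) > MAX_OUTLIER_FRACTION:
        print(f"Conflicting sensors {outliers}: stopping and requesting help")
        return None

    # Fuse only the sensors that agree with the consensus.
    inliers = positions[residuals <= MAX_RESIDUAL_M]
    return inliers.mean(axis=0)


pose = check_pose_consistency({
    "lidar": np.array([12.02, 4.98]),
    "stereo_vision": np.array([11.97, 5.03]),
    "uwb": np.array([14.60, 5.10]),   # an outlier reading
    "wheel_odometry": np.array([12.00, 5.01]),
})
print("Fused pose:", pose)
```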

‘Our robots are one of the only self-driving vehicles on the market to identify and safely deal with moving objects, such as people or other equipment, and locate and pick up materials outside of pre-determined locations. For instance, if a human drops off a pallet a few inches from its expected position, most AGVs [automated guided vehicles] would not be able to retrieve it,’ he remarked.

‘Our implementations within FedEx, for example, have led to a significant reduction in damage to facilities, equipment and materials, and increased safety for staff,’ Theobald added.

For Theobald, the key benefits of robotic vision are safety and flexibility – and, among other things, he is keen to stress that devices equipped with such technology are able to confidently manoeuvre through dynamic environments and be agile as demand fluctuates.

‘With the right vision technology, vehicles can swiftly adapt to facility changes, be added to new operations, or change work zones without retraining, remapping, or installing reflectors or guiding wires. This allows companies to scale automation, as their needs evolve and business grows,’ he said.

In the coming years, Theobald predicts that vision and machine learning will continue to be the most crucial technologies for logistics automation, particularly because he believes they are key to remaining competitive as customer expectations rise and labour shortages continue to put businesses under strain.

‘The goal is to supercharge human productivity, allowing humans to get more done while having safer, more enjoyable work. Vision and machine learning provide self-driving vehicles with the ability to operate safely, to self-correct, and most importantly to learn – generating a continuous cycle of improvement,’ he said.

‘These technologies allow organisations to harness the fundamental strengths of robots: tireless precision in heavy-duty work. Combining these strengths with human expertise and real-time intelligence will create flexible processes that effectively tackle changing fulfilment requirements,’ he concluded.


