Every company has an inventory control process that requires a physical inventory count to be conducted on a periodic schedule. At Xilinx, this responsibility falls on our Inventory Control team.

In this role, Inventory Control gives the organization confidence that physical inventory exists where it is reported. The work requires personnel to travel to business sites to confirm that the inventory physically exists. The count is conducted in the traditional manner: observing the inventory by eye and recording it with pen and paper, then reconciling the results against the system of record. Such efforts take days to complete and are often challenging because Xilinx inventory is fast moving, more like a “motion picture” than a still image. Recurring concerns include inventory size, transaction volume, the number of business locations, budget, cost, time and preparation.

Leveraging Emerging Technology

The advent of emerging technology played a huge role in “recasting” the manual process just described. The goal is to efficiently extract the label information needed for the inventory counting process from photographs. In the new process, business partners take photos of the inventory; once the photos are uploaded to the cloud, automation takes over.

The automation consists of a pipeline combining computer vision and text mining. The first step uses a deep learning model to identify, extract and separate label images from the photos. The extracted images containing individual labels are then processed with OCR (Optical Character Recognition) technology. Finally, an algorithm developed to mine the raw text produced by OCR extracts the label attributes of interest.
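The fragment below is a minimal sketch of the OCR and text-mining stages, assuming the detector has already produced cropped label images. It uses the open-source Tesseract engine via pytesseract; the attribute names and regular expressions are illustrative placeholders, not Xilinx's actual label formats.

import re
import pytesseract                     # Python wrapper around the Tesseract OCR engine
from PIL import Image

# Hypothetical patterns for label attributes of interest.
ATTRIBUTE_PATTERNS = {
    "part_number": re.compile(r"\bPN[:\s]*([A-Z0-9\-]+)"),
    "lot_number":  re.compile(r"\bLOT[:\s]*([A-Z0-9]+)"),
    "quantity":    re.compile(r"\bQTY[:\s]*(\d+)"),
}

def extract_label_attributes(label_image_path):
    """Run OCR on one cropped label image and mine the raw text for attributes."""
    raw_text = pytesseract.image_to_string(Image.open(label_image_path))
    attributes = {}
    for name, pattern in ATTRIBUTE_PATTERNS.items():
        match = pattern.search(raw_text)
        if match:
            attributes[name] = match.group(1)
    return attributes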


Figure 1: Pre-trained CNN and OCR to automate extraction of label information

Applying CNN

A key aspect of the solution, using an object detection model to identify labels in the photos, is described in more detail here. A CNN (Convolutional Neural Network) was trained to predict bounding boxes around the labels. Photos of likely Xilinx or partner labels were collected for training, and thousands of images were then generated synthetically from each photo to produce a large dataset, as sketched below after Figure 2.


Figure 2: Thousands of synthetically generated label images for training
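The following is a minimal sketch of such synthetic generation, assuming one clean photo per label and a Pillow-based augmentation loop; the transforms and parameter ranges are illustrative choices, not the production pipeline.

import random
from PIL import Image, ImageEnhance

def generate_variants(label_path, count=1000):
    """Produce `count` randomly perturbed copies of a single label photo."""
    base = Image.open(label_path).convert("RGB")
    variants = []
    for _ in range(count):
        # Random rotation, brightness, contrast and scale changes mimic the
        # variability of photos taken by hand on a warehouse floor.
        img = base.rotate(random.uniform(-15, 15), expand=True, fillcolor="white")
        img = ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))
        img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
        scale = random.uniform(0.8, 1.2)
        img = img.resize((int(img.width * scale), int(img.height * scale)))
        variants.append(img)
    return variants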

The TensorFlow framework was then used to train the CNN model on the generated dataset. Once training is complete, the model is used to detect labels in new images containing Xilinx or partner labels. Working from individually extracted label images significantly increases the accuracy of both the OCR step and the extraction of label attributes.
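As an illustration of the detection step, the sketch below assumes the trained detector was exported as a TensorFlow SavedModel following TensorFlow Object Detection API conventions (a single uint8 image tensor in, a dictionary with detection_boxes and detection_scores out); the model path and score threshold are placeholders.

import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical path to the exported detector.
detect_fn = tf.saved_model.load("exported_label_detector/saved_model")

def crop_detected_labels(photo_path, score_threshold=0.5):
    """Detect label regions in an inventory photo and return the cropped label images."""
    image = np.array(Image.open(photo_path).convert("RGB"))
    input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)

    boxes = detections["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
    scores = detections["detection_scores"][0].numpy()

    height, width = image.shape[:2]
    crops = []
    for box, score in zip(boxes, scores):
        if score < score_threshold:
            continue
        ymin, xmin, ymax, xmax = box
        crops.append(image[int(ymin * height):int(ymax * height),
                           int(xmin * width):int(xmax * width)])
    return crops

Each cropped label is then handed to the OCR and text-mining stages described earlier.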


Figure 3: Trained model detects and separates labels

New ways to do old things

With inventory captured as a “snapshot”, a static moment in time, count accuracy increases because the “motion picture” scenario is eliminated. The process also becomes more efficient, since photographing the entire inventory takes only a short time.

For the Inventory Control team, validating the physical existence of inventory can now be done from the comfort of the office and in a shorter time span. For our business partners, the preparation and disruption brought about by the old manual physical count are gone.

With this, we now have new ways to do old things. By removing the need for people to be on site for the physical count, the new digitized process also allows for sustained social distancing.

SOURCE OF CONTENTS
Xilinx Asia Pacific Pte. Ltd.
www.xilinx.com