The Tech Behind DeepPack (Part 2)
September 21, 2022
In part 1 of this blog, we started discussing the technology behind DeepPack. In this part, we will go into more detail about DeepPack’s AI capabilities, and part 3 will cover the main features of the product.
Load planning challenges
The logistics world gives rise to many extremely complex optimization problems. Bin packing and route planning are obvious examples. The number of possible packing configurations “explodes” exponentially with the number of items to be packed. It is impossible to find the optimal solution via the brute force method of enumerating all possible configurations and picking the best one: the number of possible configurations is orders of magnitude bigger than the number of atoms in the universe!
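To make the scale concrete, here is a small sketch. Even counting only the *orderings* of the items (a lower bound on the number of configurations, since it ignores positions and rotations), a modest load already exceeds the commonly cited estimate of roughly 10^80 atoms in the observable universe:

```python
import math

ATOMS_IN_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

def orderings(n_items: int) -> int:
    """Number of ways to order n distinct items for packing.

    This is only a lower bound on the number of packing configurations:
    it ignores placement positions and rotations, which make the count
    even larger.
    """
    return math.factorial(n_items)

# Even for a 60-item load, the count of orderings alone dwarfs
# the number of atoms in the observable universe.
print(orderings(60) > ATOMS_IN_UNIVERSE)  # True
```

This is why brute-force enumeration is hopeless even for shipments of quite ordinary size.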
Most companies handle very different types of goods, boxes, and Stock Keeping Units (SKUs), and have a panoply of constraints that need to be satisfied to ensure a shipment arrives efficiently, on time, and with no damage to the goods. It is the complexity of this problem that makes it so difficult for manual load planning to optimize space utilization while respecting each shipment’s real-life constraints.
DeepPack: Unlocking new planning strategies using RL
Reinforcement Learning (commonly abbreviated as RL) is an area of Machine Learning in which an automated “RL agent” learns to take suitable actions to maximize reward when faced with a multitude of possible choices. RL agents are trained with a reward-and-punishment mechanism: rewarded for correct choices and punished for wrong ones. In doing so, an RL agent learns to minimize wrong choices and maximize right ones.
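As a minimal illustration of this reward-and-punishment loop, consider a toy two-action problem (this is a standard tabular value-learning sketch, not DeepPack’s actual training setup): one action is rewarded, the other punished, and the agent’s value estimates gradually steer it toward the rewarded choice.

```python
import random

random.seed(0)

ACTIONS = ["good", "bad"]
REWARDS = {"good": +1.0, "bad": -1.0}  # reward right choices, punish wrong ones

q = {a: 0.0 for a in ACTIONS}  # the agent's value estimate for each action
alpha, epsilon = 0.1, 0.2      # learning rate and exploration rate

for _ in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Nudge the estimate for the chosen action toward the observed reward.
    q[action] += alpha * (REWARDS[action] - q[action])

# After training, the agent values the rewarded action highest.
print(max(q, key=q.get))  # good
```

Real RL systems replace the lookup table with a neural network and the two actions with enormous action spaces, but the learning loop is the same idea.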
We see applications of RL in fields such as self-driving cars, industrial automation, healthcare, and many more. The opportunities the technology presents to solve decision-making problems are endless.
InstaDeep, the company that developed DeepPack, is a leader in innovative industrial deployment, delivering best-in-class productized innovation to our partners (e.g. BioNTech and Deutsche Bahn). We also regularly publish research and conference papers on the techniques we use.
In the context of load planning, RL provides a more robust technology than classic Operations Research (OR) and manual methods due to the complexity of the problem and the increasing number of constraints. It also allows for constant improvement as the model continues to learn from data and instances over time: the more you use DeepPack, the better it gets!
Field experts have good “rules of thumb” for minimizing wasted space for given sets of constraints or item dimensions. Sophisticated heuristic solutions exist for given combinations of constraints, but they may perform poorly if constraints are added or removed, or if the items being packed fall outside the dimensions they were designed for.
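A classic example of such a heuristic is first-fit decreasing for one-dimensional bin packing (shown here as a textbook illustration of a hand-crafted rule of thumb, not as DeepPack’s algorithm): sort items largest-first, then drop each into the first bin with room for it.

```python
def first_fit_decreasing(items, capacity):
    """Classic first-fit-decreasing heuristic for 1-D bin packing.

    Sort items largest-first, then place each into the first bin
    with enough remaining space, opening a new bin when none fits.
    """
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(packed))  # 2 bins (optimal here: total size 20, capacity 10)
```

This rule works well on many inputs, but its performance is tied to the assumptions baked into it; change the constraint mix or the item-size distribution and a different heuristic may suddenly do much better, which is exactly the brittleness described above.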
InstaDeep’s bin packing software provides a self-adaptive solution for minimizing wasted space in containers while respecting real-world physics and operational constraints. We are effectively allowing our RL agent to “learn the best heuristic” based on real-world and synthesized data. These capabilities allow us to generalize over a wide range of different constraints so that our RL agent makes good choices in different scenarios, for example:
- With or without certain constraints,
- For loads with few distinct shapes but large counts for each shape,
- For loads with a large number of distinct shapes but small counts for each shape type.
DeepPack’s strength lies not only in its ability to beat traditional algorithms but also, and even more importantly, in its flexibility to incorporate further operational constraints easily, in a user-friendly platform hosted on a dedicated cluster of supercomputers. We’re constantly developing new features, e.g. container selection and per-area weight limits. We also work with customers to develop their own version of the product, accounting for their specific constraints and goals.
If you want to test DeepPack on your own data, you can register for a free trial.
If you have any questions, get in touch at email@example.com.