This is an important question to ask about artificial intelligence and machine learning. A lot of advanced tech gets labeled AI, but there are vast differences between a warehouse robot with machine learning capabilities and a generative AI system.
A good example is Stretch, our artificially intelligent box mover, made by our partners at Boston Dynamics. Its primary function is unloading trucks and containers: picking up boxes and moving them elsewhere. Stretch combines several actions to complete this task – various grips, arm movements, and so on – all while interpreting visual input, kind of like a human. Before Stretch ever saw a warehouse, it tested random combinations of actions in pursuit of a ‘reward’ (the positive outcome of moving a box). A dropped package was ‘penalized’ (recorded as a negative outcome to be avoided), and so were unnecessary movements.
In other words, Stretch learned by trial and error – again, kind of like a human. The main difference is that we use our innate intelligence to figure things out, while Stretch relies on an algorithm – a robot ‘brain’ – that learns which actions earn rewards and which bring penalties, a technique known as reinforcement learning. Now, when we present Stretch with a wall of boxes, it knows which one to pick up first. And when we put Stretch in a new warehouse, it adapts to the unfamiliar layout.
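To make the reward-and-penalty idea concrete, here is a minimal sketch of trial-and-error learning on a toy task: an agent must learn which of three boxes to pick first. The task, the reward values, and all the names are hypothetical illustrations – this is the general technique, not Stretch’s actual training code.

```python
import random

# Toy reward-driven trial and error on a hypothetical task:
# the "robot" must learn which of three boxes to pick up first.
ACTIONS = ["top_box", "middle_box", "bottom_box"]
REWARDS = {"top_box": 1.0,      # safe pick: reward
           "middle_box": -1.0,  # stack collapses: penalty
           "bottom_box": -1.0}  # stack collapses: penalty

q_values = {a: 0.0 for a in ACTIONS}  # the learned 'brain': one score per action
alpha, epsilon = 0.1, 0.2             # learning rate, exploration rate

random.seed(0)
for trial in range(500):
    # Mostly exploit the best-known action; sometimes explore at random.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    # Nudge the score toward the reward or penalty just observed.
    q_values[action] += alpha * (REWARDS[action] - q_values[action])

print(max(q_values, key=q_values.get))  # prints "top_box"
```

After a few hundred random-ish trials, the score for the rewarded action climbs toward 1.0 while the penalized ones sink, so the agent reliably grabs the top box first – learning purely from outcomes, with no one telling it the rule.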
Generative AI – as the name implies – generates something new. Ask ChatGPT to write a poem about a friendly gray cat wearing a yellow and red uniform delivering parcels on a tricycle, and it will do just that. How? It learned by analyzing a massive dataset of text; the statistical model of language built from that data is called a large language model, or LLM.
The exact figures haven’t been made public, but ChatGPT’s LLM was reportedly trained on some 45 terabytes of text. One terabyte is roughly 170 billion words – far more text than any human could ever read and retain. The data is unlabeled, but by detecting and categorizing recurring linguistic patterns, ChatGPT can produce strings of words that humans recognize as poetry.
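The core idea of finding patterns in unlabeled text can be shown in miniature. The sketch below counts which word tends to follow which in a tiny made-up corpus, then uses those counts to generate a new sentence. A real LLM uses neural networks and vastly more data; this bigram model is only a toy illustration of the same principle.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus: no labels, just raw text.
corpus = ("the gray cat delivers parcels "
          "the friendly cat delivers letters "
          "the gray cat rides a tricycle").split()

# Learn the pattern: for each word, count what follows it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word):
    """Predict the most common continuation seen in the data."""
    return follows[word].most_common(1)[0][0]

# Generate new text one word at a time, like a (very crude) language model.
word = "the"
generated = [word]
for _ in range(4):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))  # prints "the gray cat delivers parcels"
```

No one labeled the corpus or wrote grammar rules; the counts alone capture enough structure to produce a plausible sentence. An LLM does something analogous at enormous scale, which is why it can recombine patterns into a poem it has never seen.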