Monday, February 29, 2016

Industrie 4.0

In our post on the Fourth Industrial Revolution last January, we showed a table from the World Economic Forum. It had 1784 as the date for the first industrial revolution, 1870 for the second, 1969 for the third, and a big question mark for the fourth.

Indeed, technology breakthroughs can rarely be pinned to a single date. In a post on the role of a color scientist, I wrote that discoveries are in the air or ether: they happen when the time is ripe for them, and at that time many people will have the same insight within a few weeks or months of each other. Research is very expensive; it is a high-risk investment, and timing is everything, otherwise you lose your investment.

By that I meant that technologies emerge over an interval of time. The widespread adoption of a revolutionary technology takes an order of magnitude longer than its emergence, so giving a precise date is not possible.

When industry was still artisanal, mechanical force was supplied by waterwheels, donkeys, or indentured labor. Although this made mechanical production equipment possible, the force generation was unreliable and limited. In the first industrial revolution, steam allowed a steady supply of considerable mechanical force. It became possible to build factories with elaborate mechanical force distribution systems based on rods and belts. The layout of a factory floor was dictated by the force distribution system.

In the second industrial revolution, the electric motor made the design of the factory floor layout much easier: what was distributed was electricity, while the mechanical force was generated locally and independently at each workstation by electric motors. This enabled the division of labor, conveyor belts, and true mass production.

The third industrial revolution was the digitization of the entire design and production process through CAD/CAM tools. Engineers could use interactive software to design new products. Because the design consisted of a software program, that program could be executed to generate simulations. Not only was it possible to design smoother products with Bézier curves: the simulation could verify whether a part was manufacturable on a CNC lathe and how long the process would take. CAD/CAM programs could also generate the control program for the CNC lathe, thus automating the entire manufacturing process.
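To make this concrete, here is a minimal sketch of how a CAD tool can evaluate a cubic Bézier curve with de Casteljau's algorithm (repeated linear interpolation of the control points); the control points and sampling density below are illustrative assumptions, not taken from any particular CAD system.

```python
# De Casteljau evaluation of a cubic Bezier curve: repeated linear
# interpolation of the control points. A CAD tool builds smooth outlines
# from such curves; a CAM stage can then sample them into CNC toolpaths.

def lerp(p, q, t):
    """Linear interpolation between two points at parameter t."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def bezier_point(controls, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]."""
    pts = list(controls)
    while len(pts) > 1:  # one de Casteljau round per iteration
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Illustrative control points for a cubic curve.
controls = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
path = [bezier_point(controls, i / 20) for i in range(21)]  # 21 samples
print(path[0], path[10], path[-1])
```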

If we have to put dates and names to the birthing of the fourth industrial revolution, they would be Kristen Nygaard (1957) and Ole-Johan Dahl (1962). When he was a visiting scientist at Xerox PARC, Nygaard told me how a newspaper trade union came to him for help with stressed-out newspaper workers. The newspapers were introducing digital layout systems, which were supposed to be much more ergonomic than the old system of comps and rubylith. As a professor of operations research, he conducted research on planning, control, and data processing to sleuth out the source of the stress.

The result, in 1962, was Simula I, a superset of Algol 60 for simulating discrete event networks such as typesetting. It was refined in Simula 67 with objects, classes, subclasses, virtual methods, co-routines, discrete event simulation, and garbage collection. During a first visit from Oslo to Palo Alto, this begat Smalltalk.

In 1973 came Hewitt and his co-workers' extension to exploit massively parallel computers with the concurrent execution of objects. Communication was by message passing, decoupling the communication from the sender. The key new concept is that of an actor: a computational entity that, in response to a message it receives, can concurrently 1) send a finite number of messages to other actors, 2) create a finite number of new actors, and 3) designate the behavior to be used for the next message it receives.

In the actor model, messages are simply sent: there is no buffering, no synchronous handshaking, and no ordering (FIFO requires an explicit queue actor). Everything is local. These ideas were influenced by packet-switched networks. The power also comes from the fact that a message can contain another actor, e.g., a resumption (a.k.a. continuation or stack frame) to which the recipient sends a response, enabling a variable topology.
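As an illustration, here is a minimal actor sketch in Python, assuming threads and per-actor mailboxes; note that the actor model itself guarantees no ordering, so the Queue used as a mailbox here is only a convenient approximation. The sketch shows the three capabilities: sending messages, a message carrying another actor as the reply destination, and designating the next behavior.

```python
# Minimal actor sketch with Python threads. Each actor has a mailbox and
# a behavior: a function that, given the actor and a message, may send
# messages, create new actors, and designate the next behavior.
import queue
import threading
import time

class Actor:
    """Minimal actor: a mailbox plus a current behavior."""
    def __init__(self, behavior):
        self.behavior = behavior
        self.mailbox = queue.Queue()  # approximation: real actors have no FIFO
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):              # asynchronous send, no handshake
        self.mailbox.put(msg)

    def _run(self):
        while True:
            self.behavior(self, self.mailbox.get())

def counting(n):
    """Behavior factory: processing a message designates the next behavior."""
    def behavior(self, msg):
        reply_to, payload = msg       # the message carries another actor
        reply_to.send((n, payload))
        self.behavior = counting(n + 1)  # "become" the next behavior
    return behavior

printer = Actor(lambda self, msg: print("printer got", msg))
counter = Actor(counting(0))
for word in ("imposition", "folding", "trimming"):
    counter.send((printer, word))
time.sleep(0.1)                       # let the daemon threads drain
```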

This technology is also described in terms of functional programming, because the behavior is a mathematical function that expresses what an actor does when it processes a message.

To build the bridge to industry, think of an actor as a model for a machine or workstation. A variable topology means that each workstation can autonomously adapt the workflow depending on external factors (sensors). This requires refining the concept of modeling.

Modeling is the act of representing a system or subsystem formally. From a mathematical point of view, a model is a set of assertions about properties of the system, such as its functionality or physical dimensions. The constructive counterpart defines a computational procedure that mimics a set of properties of the system; this is also called an executable model or simulation.

With this, we can define design as the act of defining a system or subsystem. Usually, this involves defining one or more models of the system and refining the models until the desired functionality is obtained within a set of constraints.
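As a toy illustration of these two views, here is a sketch in which the properties of a folding workstation are stated as an assertion and mimicked by a computational procedure; the throughput and setup figures are invented for the example.

```python
# Illustrative only: a tiny executable model of a folding workstation.
# The mathematical model asserts properties; the constructive counterpart
# is a procedure that mimics them and can be run against a workload.

SETUP_MINUTES = 10.0        # assumed property, purely illustrative
SHEETS_PER_MINUTE = 150.0   # assumed property, purely illustrative

def folder_minutes(sheets: int) -> float:
    """Executable model: time the folder needs for a job of `sheets`."""
    return SETUP_MINUTES + sheets / SHEETS_PER_MINUTE

# A design constraint stated as an assertion, checked against the model:
# a 9000-sheet job must fit in a 75-minute slot.
assert folder_minutes(9000) <= 75.0
print(folder_minutes(9000))  # 70.0 minutes
```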

Embedded software is software that resides in devices that are not first-and-foremost computers. A key feature of embedded software is that it engages the physical world, and hence has temporal constraints that desktop software does not share.

Executable models are constructed under a model of computation, which is the set of “laws of physics” that govern the interaction of components in the model. The set of rules that govern the interaction of components is called the semantics of the model of computation.

In the example of a printing plant, each actor can be a PDF transformer that performs the step of a workstation (e.g., imposition, folding, or trimming) by performing the corresponding operation on a PDF file. By combining the actors in a workflow, we can simulate the set of all print jobs in the printing plant and assert correctness. When timing information is collected, the simulation can find bottlenecks and deadlocks. Because rush jobs increase profit, the simulation allows the workflow topology to reconfigure itself locally and dynamically to maximize the profit function. Also, if there is a breakdown, e.g., a paper jam, the other workstations can reconfigure their use of queues.
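A minimal sketch of such a simulation under a discrete-event model of computation, treating the three stages named above as a tandem queue with assumed, illustrative service times; the recurrence (a stage starts a job when both the job has arrived from the previous stage and the stage has finished its previous job) is what exposes the bottleneck.

```python
# Minimal discrete-event view of a three-stage print workflow (tandem
# queue). For each job, t at stage s is: max(arrival from previous
# stage, time stage s last finished) + service time. All service times
# are illustrative assumptions (minutes per job).

SERVICE = {"imposition": 2.0, "folding": 5.0, "trimming": 3.0}  # assumed

def simulate(n_jobs, service=SERVICE):
    stages = list(service)
    done = {s: 0.0 for s in stages}   # when each stage last finished
    finish = []
    for _job in range(n_jobs):
        t = 0.0                       # all jobs are available at time 0
        for s in stages:
            t = max(t, done[s]) + service[s]
            done[s] = t
        finish.append(t)
    return finish, done

finish, done = simulate(10)
print("last job done at", finish[-1], "min")
# The stage that stays busy for nearly the whole makespan is the
# bottleneck (here: folding, the slowest stage).
for s, t in done.items():
    print(s, "finished its queue at", t)
```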

An example of a workstation is the raster image processor (RIP). We can change the algorithm for the black skeleton, gamut mapping, or halftoning, and the actor can use simulation to select the most efficient algorithms. It considers the corresponding ICC profiles for the printer and can simulate giving the printer a virtual Farnsworth-Munsell 100-hue test, as shown in this diagram of the workflow in physical and simulation modes.

Figure: workflow in physical and simulation modes
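To sketch how such a selection by simulation might look, here is a toy harness that times candidate halftoning algorithms on a sample raster and picks the fastest; the two candidates (a fixed threshold and Floyd-Steinberg error diffusion) are simple stand-ins for illustration, not a real RIP's algorithms.

```python
# Sketch: a RIP actor choosing among candidate halftoning algorithms by
# timing them on a sample grayscale raster. The selection harness is the
# point; the candidates are deliberately simple stand-ins.
import time

def threshold_halftone(img, level=128):
    """Fixed-threshold screening: each pixel becomes black or white."""
    return [[255 if p >= level else 0 for p in row] for row in img]

def error_diffusion_halftone(img):
    """Floyd-Steinberg error diffusion with the classic 7/3/5/1 weights."""
    h, w = len(img), len(img[0])
    buf = [list(row) for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new
            if x + 1 < w: buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0: buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w: buf[y + 1][x + 1] += err * 1 / 16
    return out

def fastest(candidates, sample):
    """Time each candidate on the sample and return the quickest."""
    timings = {}
    for name, fn in candidates.items():
        t0 = time.perf_counter()
        fn(sample)
        timings[name] = time.perf_counter() - t0
    return min(timings, key=timings.get), timings

sample = [[(x * y) % 256 for x in range(64)] for y in range(64)]
best, timings = fastest(
    {"threshold": threshold_halftone,
     "floyd-steinberg": error_diffusion_halftone},
    sample,
)
print("selected:", best, timings)
```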

Returning to the concept of the fourth industrial revolution, in Germany, the brand name for this initiative is Industrie 4.0. They use slightly different terminology based on the concept of the twin. They would write:

Four aspects drive the future of manufacturing: modularity, connectivity, autonomy, and the digital twin. A digital twin of an autonomous system is a very realistic model of the current state of the process and of its behavior in interaction with its environment in the real world. It is a notion in which the information created in each stage of the product lifecycle is seamlessly made available to subsequent stages.

The digital twin approach is the next wave in modeling, simulation and optimization technology. Simulation is extended to all life cycle phases as a core product or system functionality.
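A minimal sketch of the idea, assuming a hypothetical fuser-temperature twin with an invented first-order model: the twin mirrors the physical state by assimilating streamed sensor updates, and the same model can be run forward to predict behavior.

```python
# Minimal sketch of the digital-twin idea: a model that mirrors the
# current state of a physical process by assimilating sensor updates and
# can be run forward to predict. The thermal model and its coefficients
# are illustrative assumptions, not real printer data.

class FuserTwin:
    """Hypothetical twin of a printer fuser's temperature."""
    def __init__(self, temp=20.0, gain=0.8):
        self.temp = temp   # mirrored state (deg C)
        self.gain = gain   # blend factor for sensor assimilation

    def assimilate(self, sensor_temp):
        # Pull the twin's state toward each real sensor reading.
        self.temp += self.gain * (sensor_temp - self.temp)

    def predict(self, heater_on, steps):
        # Run the model forward without touching the mirrored state.
        t = self.temp
        for _ in range(steps):
            t += 5.0 if heater_on else -1.5  # assumed dynamics per step
        return t

twin = FuserTwin()
for reading in (21.0, 35.2, 61.8, 90.5):     # streamed sensor updates
    twin.assimilate(reading)
print("mirrored:", round(twin.temp, 1),
      "predicted after 10 steps:", twin.predict(True, 10))
```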

The concept of using “twins” is rather old. It dates back to NASA's Apollo program, where at least two identical space vehicles were built so that the conditions of the vehicle in flight could be mirrored during the mission; the vehicle remaining on Earth was called the twin. NASA's definition: “A digital twin is an integrated multiphysics, multiscale simulation of a vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin.” At Airbus, a digital twin is called an iron bird.