Thursday, July 21, 2016

Virtual energy production and delivery at IWB

In our post on the virtual print factory, we saw how an art director can be simulated and used in a simulated press check to optimize print production. There are other applications that can be revolutionized through the online simulation of production. Today, we look at the future of energy production and delivery, and at how society benefits from making it more efficient.

In 1852 Basel, the private company Gasindustrie was founded and in 1867 it was nationalized. Over the years other utilities were added: water delivery, water production, electricity, long-distance heating, refuse processing, and a fiber network for broadband internet and telephony. The name became IWB, for Industrielle Werke Basel. It was privatized in 2010 (CEO David Thiel), but all shares belong to the Canton Basel-City. IWB is responsible for the supply of energy, water, and telecom; it has a mandate to optimize its operations (smart IWB 2020).

During industrialization, as in most countries, Switzerland's main energy source was coal. After World War I, not having coal mines, Switzerland boosted the education of engineers, who could then electrify the country. For example, the Crocodile locomotive was an engineering feat that could pull a freight train up the Gotthard line. In fact, the regenerative braking energy from two descending trains could pull up one train on the other side of the Alps. When in the 1930s the regime in Germany started flexing its muscles and using its coal to wield power, Switzerland invested considerable brain power to wean itself off coal as much as possible. For example, the cantonal buildings in Zurich are heated with heat pumps extracting heat from the Limmat.

The ETH cranked out generation after generation of skilled engineers who designed hydroelectric dams, turbines, and power distribution systems. Many plants were of the pump type, consisting of an upper and a lower reservoir: during the day water falls and generates power, while at night cheap electricity is imported from fixed throughput plants to pump the water back up.

This history is reflected in IWB's energy sources. In 2015, the energy sources for electricity in percent were

hydroelectric: 96.14
wind: 0.22
solar: 0.14
other renewable: 3.50

In the 4th quarter of 2015, on the European domestic market, the cost of a kilowatt-hour (kWh) of power was 3.3 cents. However, in Basel, at the public car charge boxes, the consumer price varies between 45 and 70 cents per kWh. This is an opportunity to increase efficiency. Smart IWB 2020 aims at reducing and stabilizing the end-user energy costs.

The old big central power plants will remain and keep producing and storing energy. New small decentralized systems have been built to produce and store energy at the regional level. Now, end-users are starting to produce, store, and consume energy. Excess energy is shared at the neighborhood level.

This is what a 1 MW lithium-metal-oxide battery in Zurich looks like:

1 MW lithium-metal-oxide battery in Zurich

End-users must also store energy in some form to even out the network dependence during the day. There may be excess solar energy in the afternoon and a lack of energy during a cold winter night.

each house has a facility to store excess energy

This is made possible by the new control network shown in dark gray in the figure below (for an animation see here). IWB can collect data everywhere on the network and feed it to its simulation, which allows it to optimize the overall energy generation and conversion system.

Smart IWB 2020. The control network is shown in dark grey

Electricity companies have been using simulations for many years. For robustness, a distribution system cannot have a tree topology, because a failure at a node will black out the entire subtree. The required mesh topology is difficult to manage because the system has to be kept in equilibrium, otherwise, a failure will cascade to a blackout of the entire network.

What is new with smart IWB 2020 is that regulation is no longer done just by releasing more water when the network frequency drops below 49.8 Hz and by pumping water back up when the frequency rises above 50.2 Hz. As the figure above shows, there are many more sources of electricity that have to be synchronized and balanced out.
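To make the old rule concrete, here is a minimal sketch of such a frequency-deadband controller; the 49.8 Hz and 50.2 Hz thresholds are the ones quoted above, while the function and its plant interface are invented for illustration.

```python
# Minimal sketch of the classic frequency-deadband rule described above;
# the thresholds come from the text, everything else is invented.
def regulate(frequency_hz: float) -> str:
    """Decide what a pump-storage plant should do at a given grid frequency."""
    if frequency_hz < 49.8:
        return "generate"   # too much load: release water through the turbines
    if frequency_hz > 50.2:
        return "pump"       # too much generation: pump water back up
    return "idle"           # inside the deadband: do nothing

for f in (49.7, 50.0, 50.3):
    print(f, regulate(f))
```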

In 2000, Germany introduced a law to subsidize renewable energies by guaranteeing the producers a profit, i.e., by taking out a major part of their risk to conduct business. In 2004, the European Union liberalized the power market, adding to the mix the windmill farms in Denmark among others. In Germany alone, renewable energy production surged from 6,277 GWh in 2000 to 153,000 GWh in 2015.

The availability of this low-cost renewable energy from the north wreaked havoc on the business model of the Swiss generators, who were generating expensive electricity during the day by draining the high reservoirs and importing cheap electricity at night to pump the water back up from the low reservoirs. Today, solar plants in Germany deliver their maximum energy around noon, exactly the time when pump plants in the Alps used to generate the highest profits.

For example, at the time of this writing, the producer Alpiq can generate only ¼ of its hydropower at a profit, while ½ breaks even and ¼ is sold at a loss. On average, hydropower generation costs Alpiq 6.5 cents per kWh, twice the European market price. Even at its newest generation facilities with the latest turbine designs, the costs are 3.8 and 4.5 cents per kWh, respectively. Alpiq expects that next year or the year after, the open market price will sink to 2 cents per kWh or even slightly less.

The numerous nuclear power plants in France and Switzerland, while also causing losses to the hydroelectric generators, cannot compete with the renewable sources. At the Gösgen nuclear power plant, production costs in 2014 were 3.4 cents per kWh. In 2015 they were 5.1 cents per kWh, but this was due to accounting changes and costs are expected to sink again; still, 3.4 > 2. Before the 2008 financial crisis, 7 GW of new power generation capacity was sold in Europe each year; after the crisis, sales dropped to 1.3 GW, and in the past two years sales were less than 1 GW.

The solution is to use online simulations to not just optimize electric power generation, but all energy management: electricity, hot steam for electricity production, hot water for heating, and warm water for washing. Heat is produced by burning refuse, natural gas, biomass (wood refuse), etc. It is also recovered from data centers, instead of dispersing it in the atmosphere through air conditioning chillers. This photograph by Mathias Leemann shows the refuse burning plant of Basel.

Refuse burning plant Basel; photo by Mathias Leemann

Heat can be stored in water, soil, and stones, as has been done since Roman times. A more contemporary method used by IWB is the use of fuel cells. When there is excess electricity, electrolysis of water is used to generate hydrogen. Hydrogen is also produced from natural gas when consumption is low. This hydrogen is easy to store. When electricity prices are high, hydrogen fuel cells are used to generate electricity.

Coordinating and timing all these sources, stores, carriers, and consumers of energy is a very complex task. When IWB sells its electricity at the public charge boxes (photograph by Simon Havlik) around the Canton for a much lower price than today's 45 to 70 cents per kWh, cars based on burning fossil fuels will disappear very quickly. Such is the impact of smart energy management.

IWB charge box; photograph by Simon Havlik

So far, we have seen how the online simulation of a complex energy provision system can considerably reduce the cost of energy. However, this does not yet help with the goal of the 2000 Watt Society. If we build our houses with recycled glass shards in the outer concrete walls, then use 12 cm of insulation and cover it with 16 cm of solid wood on the inside, and also give up private ownership of cars, we might achieve a 3500 Watt Society.

50 years ago, people got by with less than 2000 Watt. Where is the problem? It is not at the individual level but at the society level. We have become much more mobile: if you live in Lugano, you no longer go to St. Moritz for an extended weekend, but to Paris. Also, we have become digital packrats. All over the world, we have huge server farms that store all the digital media we never consume but that is valuable to social network companies, which dissect our lives and sell us stuff we do not really need.

Back to the virtual print factory:

virtual print factory

The output of the prepress stage is a PDF file. The two presses take raster images, therefore the computer in front of the press has to do the ripping and is called the digital front-end. In John L. Recker et al., Font rendering on a GPU-based raster image processor, Proc. SPIE 7528 (January 18, 2010), the authors calculated that over a year of usage, the regular front-end RIP consumed 38,723 kWh and generated 23,234 kg of CO2, while for the GPU-RIP they built, the corresponding numbers are 10,804 kWh and 6,483 kg.

This is the kind of innovation that is required to achieve the 2000 Watt Society at the society level rather than at the individual level. There is still a lot of work to do. We recently wrote that the internet of things is a power guzzler: fortunately the cited report has some good advice.

Tuesday, July 19, 2016

Structure in Unstructured Data, Part 2

In the first part, we took a random walk from unstructured data to multimedia files, JPEG compression, a DCT-inspired classifier, and deep learning. We saw that the crux of supervised machine learning is the training.

There are two reasons for needing classifiers. We can design more precise algorithms if we can specialize them for a certain data class. For humans, the reason is that our immediate memory can hold only 7±2 chunks of information. This means that we aim to break down information into categories each holding 7±2 chunks. There is no way humans can interpret the graphical representation of graphs with billions of nodes.

As Immanuel Kant already noted, categories are not natural or genetic entities; they are purely the product of acquired knowledge. One of the functions of the school system is to create a common cultural background, so people learn to categorize according to similar rules and understand each other's classifications. For example, in biology class, we learn to organize botany according to the 1735 Systema Naturæ compiled by Carl Linnæus.

As we know from Jean Piaget's epistemological studies with children, there is assimilation when a child responds to a new event in a way that is consistent with an existing classification schema. There is accommodation when a child either modifies an existing schema or forms an entirely new schema to deal with a new object or event. Piaget conceived intellectual development as an upward expanding spiral in which children must constantly reconstruct the ideas formed at earlier levels with new, higher order concepts acquired at the next level.

The data scientist's social role is to further expand this spiral.

For data, this means that we want to cluster it (recoding by categorization). Further, we want to connect the clusters in a graph so we can understand its structure (finding patterns). At first, clustering looks easy: we take the training set and do a Delaunay triangulation, the dual graph of the Voronoi diagram. After building the graph with the training set, for a new data point, we just look in which triangle it falls and know its category. Color scientists are familiar with Delaunay triangulations because they are used for device modeling by table lookup. Engineers use them to build meshes for finite element methods.
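As a concrete illustration of this lookup, here is a minimal sketch using SciPy's Delaunay triangulation; the two-dimensional points and their two categories are made up, and a real device model or classifier would of course use measured training data.

```python
# Sketch of the Delaunay-triangulation lookup described above: build the
# triangulation on labeled training points, then classify a new point by the
# labels of the triangle it falls into. Data and labels are invented.
import numpy as np
from scipy.spatial import Delaunay
from collections import Counter

rng = np.random.default_rng(0)
train = rng.random((200, 2))                           # 200 training points in 2-D
labels = (train[:, 0] + train[:, 1] > 1).astype(int)   # toy categories

tri = Delaunay(train)

def classify(point):
    s = tri.find_simplex(point)                 # index of the containing triangle
    if s == -1:
        return None                             # outside the convex hull
    vertex_labels = labels[tri.simplices[s]]    # labels of the three corners
    return Counter(vertex_labels).most_common(1)[0][0]

print(classify(np.array([0.25, 0.25])), classify(np.array([0.9, 0.9])))
```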

The problem is that the data is statistical. There is no clear-cut triangulation, and points from one category can lie in a nearby category with a certain probability. Roughly, we build clusters by taking neighborhoods around the points and then intersecting them to form the clusters. The crux is knowing what radius to pick for the neighborhoods, because the result will be very different.

This is where the relatively new field of algebraic topology analytics comes into play. Topology has started looking at point clouds only in the last 15 years or so. Topology, an idea of the Swiss mathematician Leonhard Euler, studies the properties of shape independently of coordinate systems, depending only on a metric. Topological properties are deformation invariant (a donut is topologically equivalent to a mug). Finally, topology constructs compressed representations of shape.

The interesting elements of shape in point clouds are the k-th Betti numbers βk, the numbers of k-dimensional "holes" in a simplicial complex. For example, informally, β0 is the number of connected components, β1 the number of roundish holes, and β2 the number of cavities.

Algebraic topology analytics relieves the data scientist from having to guess the correct radius of the point neighborhoods by considering all radii and retaining only those that change the topology. If you want to visualize this idea, you can think of a dendrogram. You start with all the points and represent them as leaves; as the radii increase, you walk up the hierarchy in the dendrogram.
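Here is a minimal sketch of this idea, restricted to β0: count the connected components of the neighborhood graph as the radius grows. The point cloud and radii are made up; a component that persists over a long range of radii corresponds to a real cluster, while short-lived ones are noise.

```python
# Sketch of "consider all radii", for beta_0 only: the number of connected
# components of the neighborhood graph, computed with a tiny union-find.
import numpy as np

def components_at_radius(points, radius):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= radius:
                parent[find(i)] = find(j)       # union the two components
    return len({find(i) for i in range(n)})     # beta_0 at this radius

rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 0.1, (30, 2)),   # two well-separated blobs
                   rng.normal(3, 0.1, (30, 2))])
for r in (0.05, 0.2, 1.0, 5.0):
    # expect many components at tiny radii, then 2 over a wide range, then 1
    print(r, components_at_radius(cloud, r))
```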

This solves the issue of having to guess a good radius to form the clusters, but you still have the crux of having to find the most suitable distance metric for your data set. This framework is not a dumb black-box: you still need the skills and experience of a data scientist.

The dendrogram is not sufficiently powerful to describe the shape of point clouds. The better tool is the set of k-dimensional persistence barcodes, which show the Betti numbers as a function of the neighborhood radii used to build the simplicial complexes. Here is an example from page 347 in Carlsson's article cited below:

(a) Zero-dimensional, (b) one-dimensional, and (c) two-dimensional persistence barcodes

With large data sets, when we have a graph, we do not necessarily have something we can look at because there is too much information. Often we have small patterns, or motifs, and we want to study how the higher-order structure of the graph is captured by a motif. This is also a clustering framework.

For example, we can look at the Stanford web graph at some time in 2002 when there were 281,903 nodes (pages) and 2,312,497 edges (links).

Clusters in the Stanford web graph

We want to find the core group of nodes with many incoming links, and the periphery groups whose nodes are tied together and also link up to the core.

A motif that works well for social network kind of data is that of three interlinked nodes. Here are the motifs with three nodes and three edges:

Motifs for social networks

In motif M7 we marked the top node in red to match the figure of the Stanford web.

Conceptually, given a higher order graph and a motif Mi, the framework searches for a cluster of nodes S with two goals:

  1. the nodes in S should participate in many instances of Mi
  2. the set S should avoid cutting instances of Mi, which occurs when only a subset of the nodes from a motif are in the set S

The mathematical basis for this framework is the motif adjacency matrix and the motif Laplacian. With these tools, a motif conductance metric from spectral graph theory can be defined and minimized to find S. The third paper in the references below contains several worked-through examples for those who want to understand the framework.
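To make the objective concrete, here is a small sketch for the undirected triangle motif (the cited paper treats directed motifs such as M7 analogously). It builds the motif adjacency matrix and minimizes the motif conductance by brute force on a toy graph; the real framework uses the spectral method based on the motif Laplacian instead of enumeration.

```python
# Sketch of motif-based clustering for the undirected triangle motif.
# W counts, for every adjacent pair of nodes, the triangles containing both;
# the best cluster S minimizes cut_W(S) / min(vol_W(S), vol_W(complement)).
from itertools import combinations
import numpy as np

# Two triangles {0,1,2} and {3,4,5} glued together by the triangle {2,3,4}.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3), (2, 4)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

W = (A @ A) * A                        # motif adjacency matrix for triangles
deg = W.sum(axis=1)                    # motif degrees

def motif_conductance(S):
    S = set(S)
    T = set(range(n)) - S
    cut = sum(W[i, j] for i in S for j in T)
    return cut / min(deg[list(S)].sum(), deg[list(T)].sum())

best = min((frozenset(S) for k in range(1, n) for S in combinations(range(n), k)),
           key=motif_conductance)
print(sorted(best), motif_conductance(best))   # -> [0, 1, 2] with conductance 0.25
```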

Further reading:

Monday, July 18, 2016

Structure in Unstructured Data, Part 1

In the context of big data, we read a lot about structured versus unstructured data. So far, so good. Things get a little murky and confusing when advanced analytics—which refers to analytics for big data—joins the conversation. The confusion comes from the subtle difference between "structured data" and "structure of data," which contain almost the same words. Both concepts are key to advanced analytics, so they often come up together. In this post, I will try to shed some light on this murkiness.

The categorization into structured, semi-structured, and unstructured data comes from the storage industry. Computers are good at chewing on large amounts of data of the same kind, for example the readings from a meter or sensor, or the transactions on cash registers. The data is structured in the sense that each record has the same fields at the same locations, for example on an 80 or 96 column punch card, if you want a visual image. This structure is described in a schema.

Databases are optimized for storing structured data. Since each record has the same structure, the location of the i-th record on the disk is i times the record length. Therefore, it is not necessary to have a file system: a simple block storage system is all that is needed. When instead of the i-th record we need the record containing a given value in a given field, we have to scan the entire database. If this is a frequent operation in a batch step, we can accelerate it by first sorting the records by the values in this field, which allows us to use binary search: logarithmic instead of linear.
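A small sketch of these two points, with an invented record layout: fixed-length records give constant-time random access by offset arithmetic, and sorting on one field enables a logarithmic binary search.

```python
# Fixed-length records: random access by offset, binary search on a sorted field.
import struct

RECORD = struct.Struct("<i20sd")        # id, fixed-width name, sensor reading
RECORD_LEN = RECORD.size

def read_record(buf: bytes, i: int):
    """Random access: the i-th record starts at byte offset i * RECORD_LEN."""
    return RECORD.unpack_from(buf, i * RECORD_LEN)

def find_by_id(buf: bytes, key: int):
    """Binary search over records sorted by id: O(log n) instead of O(n)."""
    lo, hi = 0, len(buf) // RECORD_LEN - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        rec = read_record(buf, mid)
        if rec[0] == key:
            return rec
        if rec[0] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

# A tiny "block storage" holding three records sorted by id.
block = b"".join(RECORD.pack(i, b"sensor", 20.0 + i) for i in (1, 4, 7))
print(find_by_id(block, 4))
```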

Because an important performance metric is the number of transactions per second, database management systems use auxiliary structures like index files and optimized query systems like SQL. In a server-based system, when we have a query, we do not want to transfer the database record by record to the client: this leads to server-based queries. When there are many clients, often the same query is issued from various clients, therefore, caching is an important mechanism to optimize the number of transactions per second.

Database management systems are very good at dealing with transactions on structured data. There are many optimization points that allow for huge performance gains, but it is a difficult art requiring highly specialized analysts.

With cloud computing, it has become very easy to quickly deploy an application. The differentiation is no longer the optimization of the database, but being able to collect and aggregate user data so it can be sold. This process is known as monetization; an example is click-streams. The data is to a large extent in the form of logs, but its structure is often unknown. One reason is that the schemata often change without notice, so the monetizers have to infer them by reverse engineering. Since the data is structured but with an unknown schema, it is called semi-structured. With the Internet of Things (IoT), also known as the Web of Things, Industrial Internet, etc., a massive source of semi-structured data is coming to us.

This semi-structured data is high-volume and high-velocity. This breaks traditional relational databases because data parsing and schema inference become a performance bottleneck. Also, the indexing facilities may not be able to cope with the data volume. Finally, the traditional database vendors' pricing models do not work for this kind of data. The paradigms for semi-structured data are column-based storage and NoSQL (not only SQL).

The ubiquity of smartphones with their photo and video capabilities and connectedness to the cloud has brought a flood of large data files. For example, when the consumer insurance industry thought it could streamline its operations by having insured customers upload images of damages instead of keeping a large number of claim adjusters in the field, it got flooded with images. While an adjuster knows how to document damage with a few photographs, consumers take dozens of images because they do not know what is essential.

Photographs and videos have a variety of image dimensions, resolutions, compression factors, and duration. The file sizes vary from a few dozen kilobytes to gigabytes. They cannot be stored in a database other than as a blob, for binary large object: the multimedia item is stored as a file or an object and the database just contains a file pathname or the address of an object.

In juxtaposition to conventional structured data, the storage industry talks about unstructured data.

Unstructured data can be stored and retrieved, but there is nothing else that can be done with it as long as we just look at it as a blob. When we looked at analytics jobs, we saw that analysts spend most of their time munging and wrangling data. This task is nothing other than structuring the data, because analytics is applied to structured data.

In the case of semi-structured data, this consists of reverse engineering the schema, converting dates between formats, distinguishing numbers and strings from factors, and dealing correctly with missing data. In the case of unstructured data, it is about extracting the metadata by parsing the file. This can be a number of tags like the color space, or it can be a more complex data structure like the EXIF, IPTC, and XMP metadata.

A pictorial image is usually compressed with JPEG and stored in a JFIF file. The metadata in a JPEG image consists of segments, each beginning with a marker, the kind of the marker, and, if there is a payload, the length and the payload itself. An example of a marker kind is the frame type (baseline or progressive), followed by width, height, the number of components, and their subsampling. Other markers are the Huffman tables (HT), the quantization tables (DQT), a comment, and application-specific markers like the color space, color gamut, etc.
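As an illustration, here is a minimal sketch that walks these marker segments; photo.jpg is a placeholder file name, and the sketch ignores details such as restart markers and padding bytes.

```python
# Minimal walk over JPEG/JFIF marker segments: each segment is 0xFF, a marker
# byte, and (for most markers) a two-byte big-endian length plus the payload.
import struct

NAMES = {0xD8: "SOI", 0xC0: "SOF0 (baseline)", 0xC2: "SOF2 (progressive)",
         0xC4: "DHT", 0xDB: "DQT", 0xDA: "SOS", 0xFE: "COM", 0xE0: "APP0",
         0xE1: "APP1 (Exif/XMP)", 0xD9: "EOI"}

def list_segments(data: bytes):
    assert data[0:2] == b"\xff\xd8", "not a JPEG file"
    pos = 2
    while pos < len(data):
        assert data[pos] == 0xFF
        marker = data[pos + 1]
        if marker in (0xD8, 0xD9):               # SOI/EOI carry no payload
            yield NAMES.get(marker, hex(marker)), 0
            pos += 2
            continue
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        yield NAMES.get(marker, hex(marker)), length - 2
        if marker == 0xDA:                       # SOS: entropy-coded data follows
            return
        pos += 2 + length

with open("photo.jpg", "rb") as f:               # any JPEG file
    for name, payload_len in list_segments(f.read()):
        print(name, payload_len)
```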

This illustrates that unstructured data contains a lot of structure. Once the data wrangler has extracted and munged this data, it is usually stored in R data frames or in a dedicated MySQL database, which allow processing with analytics software.

Analytics is about finding even deeper structure in the data. For example, a JPEG image is first partitioned into 8×8 pixel blocks, each of which is subjected to a discrete cosine transform (DCT). Pictorially, the cosine basis (the kernels) looks like this:

the kernels of the discrete cosine transform (DCT)

The DCT transforms the data into the frequency domain, similar to the discrete Fourier transform, but in the real domain. We do this to decorrelate the data. In each of the 64 dimensions, we determine the number of bits necessary to express the values without perceptual loss, i.e., depending on the modulation transfer function (MTF) of the combined human visual system (HVS) and viewing device. These numbers of bits are what is encoded in the quantization table (DQT), and we zero out the lower order bits, i.e., we quantize the values. At this point, we have not reduced the storage size of the image, but we have introduced many zeros. Now we can statistically analyze the bit patterns in the image representation and determine the optimal Huffman table, which is stored with the HT marker, and we compress the bits, reducing the storage size of the image through entropy coding.

Just as we determine the optimal HT, we can also study the variation in the DCT-transformed image and optimize the DQT. Once we have implemented this code, we can use it for analytics. We can compute the energy in each of the 64 dimensions of the transformed image. As a proxy for energy, we can compute the variance and obtain a histogram with 64 abscissa points. The shape of the ordinates gives us an indication of the content of the image. For example, the histogram will tell us whether an image is more likely scanned text or a landscape.
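Here is a sketch of such a feature extractor: it cuts a grayscale image into 8×8 blocks, applies the DCT to each block, and uses the per-coefficient variance over all blocks as the 64-point energy histogram. The toy images at the end are made up; in practice, the feature vectors of labeled images would feed the classifier described next.

```python
# DCT energy features: per-coefficient variance over all 8x8 blocks of an image.
import numpy as np
from scipy.fft import dctn

def dct_energy_features(img: np.ndarray) -> np.ndarray:
    h, w = (d - d % 8 for d in img.shape)              # crop to whole blocks
    img = img[:h, :w].astype(float)
    blocks = (img.reshape(h // 8, 8, w // 8, 8)
                 .swapaxes(1, 2)                        # (nby, nbx, 8, 8)
                 .reshape(-1, 8, 8))
    coeffs = np.array([dctn(b, norm="ortho") for b in blocks])
    return coeffs.reshape(len(blocks), 64).var(axis=0)  # 64 "energies"

# Example: a uniform patch has zero variance in every coefficient; noise has a lot.
flat = np.full((64, 64), 128.0)
noise = np.random.default_rng(0).normal(128, 40, (64, 64))
print(dct_energy_features(flat)[:4], dct_energy_features(noise)[:4])
```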

We have built a classifier, which gives us a more detailed structural view of the image.

Let us recapitulate: we transform the image to a space that gives a representation with a better decorrelation (like transforming from RGB to CIELAB). Then we perform a quantization of the values and study the histogram of the energy in the 64 dimensions. We start with a number of known images and obtain a number of histogram shapes: this is the training phase. Then we can take a new image and estimate its class by looking at its DCT histogram: we have built a classifier.

We have used the DCT. We can build a pipeline of different transformations, each followed by a quantization. In the training phase, we determine how well the final histogram classifies the input and propagate the result back to weight each transformation in the pipeline by adjusting its quantization. In essence, this is an intuition for what happens in deep learning.

A disadvantage of machine learning is that the training phase takes a long time, and if we change the kind of input, we have to retrain the algorithm. For some applications, you can use your customers as free workers: for OCR, you can serve the training set as captchas, which your customers will classify for free. For scientific and engineering applications, you typically do not have the required millions of free workers. In the second part, we will look at unsupervised machine learning.

Wednesday, July 13, 2016

ZimRim

One way of categorizing computer users is to partition them into consumers and producers. Consumers follow their friends on social networks, watch movies, and read the news. Producers create the contents for the consumers, either as chroniclers or as copywriters.

The former enter small amounts of text into the device, so they typically give the finger to a smartphone or tablet; this finger often being a thumb or an index (thumbing). The latter need to be efficient when entering bulk data, so they typically use a desktop computer or a laptop because they come with a keyboard, allowing them to type with all ten fingers, without looking at the keyboard (touch typing).

Although a producer will mostly be touch typing, the user interfaces are mostly graphical and use a paradigm known as WIMP, for windows, icons, mice, and pointing. A mode or context change requires removing one hand from the keyboard to grab the mouse. Since this takes a longer time than moving a finger to a different key, GUIs have keyboard shortcuts. Mousing is exacerbated by today's big screens, which make it harder to locate the pointer.

Hypertext is based on links. A link is followed by clicking on it, which requires moving a hand from the keyboard to the mouse and finding the pointer. This can be annoying in activities like doing research using a search engine while summarizing the results and typing them into a text editor.

Life is easier when each link is labeled with a letter and a link can be followed by pressing that letter on the keyboard. This is what you can do with ZimRim, a free application from a Silicon Valley data science startup of the same name.

The result screen of ZimRim; click to enlarge

ZimRim's result screen is a scroll-free view with all 10 links appearing on one screen: on most laptops and desktops you do not need to scroll up and down to see and compare the links. A user can compare all 10 results at one glance and decide which best fit their query. It is clutter-free, with a uniform look, and currently ad-free.

Results open in separate tabs so that the results page stays open as a reference and you do not have to press the "back" button. If results do not open, look for a "popup blocked" message below the address bar and allow popups from this domain: some browsers mistakenly block the new tabs for result links, treating them as potential popup ads.

As a bonus for producers, ZimRim makes the whole search experience "mouse optional," although consumers and mouse users can still click the usual way.

Wednesday, June 22, 2016

9,355,339: system and method for color reproduction tolerances

In the early days of digital imaging for office use, there was a religion that the response of the system had to be completely neutral. The motivation came from a military specification that a seventh-generation copy still had to look good and be readable. If there were a deviation from neutrality, every generation would amplify it.

When a Japanese manufacturer entered the color copier market, they enhanced the images to improve memory colors and boost the overall contrast. They were not selling to the government and rightfully noted that commercial users do not do multiple generation copies.

Color is not a physical phenomenon, it is an illusion and color imaging is about predicting illusions. Visit your art gallery and look carefully at an original Rembrandt van Rijn painting. The perceived dynamic range is considerably larger than the gamut of the paints he used because he distorted the colors depending on their semantics. With Michel Eugène Chevreul's discovery of simultaneous contrast, the impressionists then went wild.

When years later I interviewed for a position in a prestigious lab, I gave a talk on how—based on my knowledge of the human visual system (HVS)—I could implement a number of algorithms that greatly enhance the images printed on consumer and office printers. The hiring engineering managers thought I was out of my mind and I did not get the job. For them, the holy grail was the perfect neutral transmission function.

This neutral color reproduction was a figment of the engineers' imagination in the early days of digital color reproduction. The scientists who earlier invented mechanical color reproduction did not have this hang-up. As Evans observed on page 599 of [R.M. Evans, “Visual Processes and Color Photography,” JOSA, Vol. 33, 11, pp. 579–614, November 1943], under the illuminator condition color constancy mechanisms in the human visual system (HVS) correct for improper color balance. As Evans further notes on page 596, we tend to remember colors rather than to look at them closely; for the most part, he notes, careful observation of stimuli is made only by trained observers. Evans concludes that it is seldom necessary to obtain an exact color reproduction of a scene to obtain a satisfying picture, although it is necessary that the reproduction shall not violate the principle that the scene could have thus appeared.

We can interpret Evans’ consistency principle (page 600) as what is important is the relation among the colors in a reproduced image, not their absolute colorimetry. A color reproduction system must preserve the integrity of the relation among the colors in the palette. In practice, this suggests that three conditions should be met. The first is that the order in a critical color scale should not have transpositions, the second is that a color should not cross a name boundary, the third is that the field of reproduction error vectors of all colors should be divergence-free. The intuition for the divergence condition is that no virtual light source is introduced, thus supporting color constancy mechanisms in the HVS.

We all know the Farnsworth-Munsell 100 hue test. The underlying idea is that when an observer has poor color discrimination, either due to a color vision deficiency or due to lack of practice, this observer will not be able to sort 100 specimens varying only in hue. In the test, the number of hue transpositions is counted to score an observer.

Farnsworth-Munsell 100 hue test

We can considerably reduce the complexity of a color reproduction system if we focus on the colors that actually matter in the specific images being reproduced. We can choose the minimal quality of colorants and paper, the simplest halftoning algorithm, and the smallest number of colorants that still preserve the consistency of that restricted palette.

First, we determine the palette, then we reproduce a color scale with a selection of the above parameters and we give the resulting color scale image a partial Farnsworth-Munsell hue test. This is the idea behind invention 9,355,339. The non-obvious step is how to select color scales and how to use a color management system to simulate a reproduction. The whole process is automated.

A graphic artist manually identifies the locus of the colors of interest, which is different for each application domain, like reproductions of cosmetics or denim clothes. A scale encompassing these critical colors is built and printed in the margin. In the physical application, the colors are measured and the transpositions are counted.

complexion color scale
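A minimal sketch of such a transposition count: sort the reproduced patches by CIELAB hue angle and count the pairwise inversions against the original order (it ignores the circularity of hue). The patch names and Lab values are invented; in practice they come from measurements or from a color-managed simulation of the reproduction.

```python
# Score a reproduced color scale by counting hue-order transpositions.
import math
from itertools import combinations

def hue_angle(lab):
    L, a, b = lab
    return math.degrees(math.atan2(b, a)) % 360.0

def transpositions(original_order, reproduced_lab):
    """Count pairs of patches whose hue order was swapped by the reproduction."""
    hues = {patch: hue_angle(lab) for patch, lab in reproduced_lab.items()}
    reproduced_order = sorted(original_order, key=lambda p: hues[p])
    pos = {p: i for i, p in enumerate(reproduced_order)}
    return sum(1 for p, q in combinations(original_order, 2) if pos[p] > pos[q])

scale = ["c1", "c2", "c3", "c4"]                      # original hue order
measured = {"c1": (55, 48, 12), "c2": (55, 40, 25),   # hypothetical Lab values
            "c3": (55, 45, 22), "c4": (55, 20, 45)}   # c2/c3 came out swapped
print(transpositions(scale, measured))                # -> 1
```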

The more interesting case is a simulation. There is no printing, just the application of halftoning algorithms and color profiles. The system extracts the final color scale and identifies the transpositions compared to the original. Why is the simulation more important?

Commercial printing is a manufacturing process and workflow design is crucial to the financial viability of the factory. Printing is a mass-customization process, thus every day in the shop is different. This makes it impossible to assume a standard workload.

For each manufacturing task, there are different options using different technologies. For example, a large display piece can be printed on a wide bubblejet printer, or the fixed background can be silkscreen printed and the variable text can be added on the wide bubblejet printer. Another example is that a booklet can be saddle stitched, perfect bound, or spiral bound.

print fulfillment diagram

In an ideal case, a print shop floor can be designed in a way to support such reconfigurable fulfillment workflows, as shown in this drawing (omitted here are the buffer storage areas).

example of a print fulfillment floor rendering; the buffer zones are omitted

However, this would not be commercially viable. In manufacturing, the main cost item is labor. Different fulfillment steps require different skill levels (at different salary levels) and take a different amount of time.

Additionally, not all print jobs are fulfilled at the same speed. A rush job is much more profitable than a job with a flexible delivery time. To stay in business, the plant manager must be able to accommodate as many rush jobs as possible.

This planning job is similar to that of scheduling trains in a saturated network like the New Railway Link through the Alps (NRLA or NEAT for Neue Eisenbahn-Alpentransversale, or simply AlpTransit). For freight, it connects the ports of Genoa and Rotterdam, with an additional interchange in Basel (the Rotterdam–Basel–Genoa corridor). For passengers, it connects Milan to Zurich and beyond. There are two separate base tunnel systems: the older Lötschberg axis and the newer Gotthard axis. In the latter, freight trains travel at least at 100 km/h and passenger trains at least at 200 km/h. These are the operational speeds; the maximum speed is 249 km/h, limited by the available power supply.

The trains are mostly of the freight type, traveling at the lower operational speed. A smaller number of passenger trains travels at twice the operational speed. A freight train can be a little late, but a passenger train must always be on time lest people miss their connections, which is not acceptable.

The trains must be scheduled so that passenger trains can pass freight trains when these are at a station and can move to a bypass rail. However, there can be unforeseen events like accidents, natural disasters, and strikes that can delay the trains on their route. The manager will counter these problems by accelerating trains already in the system above their operational speed when there is sufficient electric power to do so. This way, when the problem is solved, there is additional capacity available.

The problem with modifying the schedule is that one cannot just accelerate trains: new bypass stations have to be determined and the travel speeds have to be fine-tuned. Train systems are an early implementation of industry 4.0 because the trains also automatically communicate with each other to avoid collisions and to optimize rail usage. For AlpTransit this required solving the political problem of getting all European countries to adopt the new ERTMS/ETCS (European Train Control System) Level 2, to which older locomotives cannot be upgraded.

The regular jobs and rush jobs in a print fulfillment plant are similar. The big difference is that the train schedule is the same every day, while in printing each day is completely different. The job of the plant designer is to predict the bottlenecks and do a cost analysis to alleviate these bottlenecks. In particular, deadlocks have to be identified and mitigated. There are two main parameters: the number and speed of equipment, and the amount of buffer space. Buffering is highly nonlinear and cannot be estimated by eye or from experience. The only solution is to build a model and then simulate it.

We used Ptolemy II as the simulation framework and wrote Java actors for each manufacturing step. To find and mitigate the bottlenecks, but especially to find the dreaded deadlock conditions, we just need to code timing information in the Java actors and run Monte Carlo simulations.
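The actual implementation consisted of Java actors in Ptolemy II; the following is only a toy stand-in that conveys the flavor of such a timing simulation. Jobs with random release and service times flow through a tandem line of workstations, and the average wait per station exposes the bottleneck; the station names and rates are invented.

```python
# Toy Monte Carlo timing simulation of a tandem line of workstations.
import random

STATIONS = [("RIP", 4.0), ("press", 6.0), ("cutter", 3.0), ("binder", 8.0)]

def run_once(n_jobs: int, seed: int):
    rng = random.Random(seed)
    finish = [[0.0] * len(STATIONS) for _ in range(n_jobs)]
    wait = [0.0] * len(STATIONS)
    t = 0.0
    for j in range(n_jobs):
        t += rng.expovariate(1.0 / 7.0)                # job release times
        for s, (_, mean) in enumerate(STATIONS):
            arrival = finish[j][s - 1] if s else t     # done at previous station
            free = finish[j - 1][s] if j else 0.0      # FIFO, one machine each
            start = max(arrival, free)
            wait[s] += start - arrival
            finish[j][s] = start + rng.expovariate(1.0 / mean)
    return [w / n_jobs for w in wait]

runs = [run_once(200, seed) for seed in range(50)]     # Monte Carlo replications
avg = [sum(r[s] for r in runs) / len(runs) for s in range(len(STATIONS))]
for (name, _), w in zip(STATIONS, avg):
    # the binder, slower than the job arrival rate, dominates the waits
    print(f"{name:7s} average wait {w:7.1f} min")
```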

We used a compute cluster with the data encrypted at rest and using Ganymed SSH-2 for authentication with certificates and encryption on the wire. Each actor could run on a separate machine in the cluster. The system allows the well-dimensioned design of the plant, its enhancement through modernization and expansion, and the daily scheduling.

So far, the optimization is just based on time. In a print fulfillment plant, there are also frequent mistakes in the workflow definition. The workflow for a job is stored in a so-called ticket (believe me, reaching a consensus standard was more difficult than for ERTMS/ETCS). One of the highest costs in a plant is an error in the ticket, which causes the job to be repeated after the ticket has been amended. This makes risk mitigation through ticket verification a highly valuable function, because it considerably reduces costs by removing the need to allocate insurance expenses.

While A4 or letter size paper is the norm in office printing, commercial printers use as large a paper size as practical to save printing time and, with it, cost. This means there are ganging and imposition, folding, rotating, cutting, etc. It is easy to make an imposition mistake so that pages end up in the wrong document or at the wrong place. Similarly, paper can be cut at the wrong point in the process or folded incorrectly.

Once we have a simulation of the print fulfillment factory, we can easily solve these workflow problems, thus reducing risk and with it insurance cost. The data for print jobs is stored in portable document format (PDF) files. For each workstation in a print fulfillment plant, we take the Java actor implemented for the simulation and add input and output ports for PDF files. We then implement a PDF transformer for each workstation that applies the work step to the PDFs. There can be multiple input and output PDFs. For example, a ganging workstation takes several PDF files and outputs a new PDF file for each imposed sheet.
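As an illustration of such a PDF-transforming actor, here is a toy ganging step written with the pypdf package rather than as a Ptolemy II Java actor: it consumes several job PDFs and emits one output PDF per sheet group. The file names are placeholders, and a real imposition actor would also scale and position the pages on the sheet.

```python
# Toy "ganging" actor: several job PDFs in, one PDF per sheet group out.
from pypdf import PdfReader, PdfWriter

def ganging_actor(input_pdfs, pages_per_sheet, output_prefix):
    """Consume several job PDFs, emit one PDF per imposed sheet group."""
    pages = [p for path in input_pdfs for p in PdfReader(path).pages]
    outputs = []
    for sheet, start in enumerate(range(0, len(pages), pages_per_sheet)):
        writer = PdfWriter()
        for page in pages[start:start + pages_per_sheet]:
            writer.add_page(page)
        out = f"{output_prefix}_sheet{sheet:03d}.pdf"
        with open(out, "wb") as f:
            writer.write(f)
        outputs.append(out)
    return outputs              # the actor's output port: one PDF per sheet

print(ganging_actor(["job_a.pdf", "job_b.pdf"], pages_per_sheet=4,
                    output_prefix="imposed"))
```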

Most errors happen when a ticket is compiled. After simulating the workflow, the operator simply checks the resulting PDF files. A mistake is immediately visible and can be diagnosed by looking at the intermediate PDF files. A more subtle error source is when workstations negotiate workflow changes in the sense of the industry 4.0 technology. Before the change can be approved, the workflow has to be simulated again and the difference between the two output PDF files has to be computed.

A more valuable, but also more complex workflow change is accommodating rush jobs by taking shortcuts. For example, if there is a spot color in the press, can we reuse the same color or do we have to clean out the machine to change or remove the spot color? Another example is the question of using dithering instead of stochastic halftoning to expedite a job. Finally, earlier we mentioned the possibility of running a fixed background through a silkscreen press and printing only the variable contents on a bubblejet press.

In a conventional setting, any such change requires doing a new press-check and having the customer come in to approve it. In practice, this is not always realistic, and the owner will use their best judgment to self-approve the proof.

9,355,339 automates this check and approval. The ICC profiles are available and can be used to compute the perceived colors in each case. The transposition score for the color scale (there can be more than one) can predict the customer's approval of the press-check.

9,355,339 automates the press-check and approval

Thus, we have created a Java actor that simulates a human and can predict the human's perception. This is an industry 4.0 application where we not only have semantic models for the machines but also for the humans, and the machines can take the humans into consideration.

simulating the human in the loop

Wednesday, June 15, 2016

The internet of things is a power guzzler

A study commissioned by the International Energy Agency and conducted by Prof. Alexander Klapproth's iHome Lab at the Lucerne University of Applied Sciences made a sobering determination: The internet of things is a power guzzler! Instead of saving energy, that light switch that today does not use any power starts using a lot of power once it is connected to the internet.

The study found that today the roughly 10 billion IoT devices deployed worldwide suck up 10 TWh of electricity. By 2025, the authors project a total energy waste of 46 TWh: 78% or 36 TWh will be wasted by home automation devices, followed by connected home appliances with 15% or 7 TWh, and smart lighting with 7% or 3 TWh.

Standby energy impact forecast of investigated IoT applications

The main culprit is the power supply, which wastes more energy than the internet control device itself. The latter also uses too much power because inappropriate communication technologies are used.

Available communication technologies for IoT applications – range vs. data rate

Fortunately, energy-saving technologies already exist, as proven by smartphones, laptop computers, and other personal digital assistants. All the IoT vendors have to do is hire engineers with expertise in power management.

M. Friedli, L. Kaufmann, F. Paganini, and R. Kyburz. Energy efficiency of the internet of things. Technology and energy assessment report prepared for IEA 4E EDNA, iHomeLab, Lucerne University of Applied Sciences, Switzerland, April 2016.

Tuesday, May 31, 2016

Compression on Newell Road

Silicon Valley's Newell Road starts at West Bayshore Road in East Palo Alto. After crossing the narrow bridge over the San Francisquito Creek into Palo Alto, it continues south-southwest then dead south to the public library and the art center. After crossing Embarcadero Road, it proceeds southeast and terminates at David Starr Jordan Middle School.

Jordan had become famous around 1977 because Xerox PARC gave it an Alto—which it called "interim Dynabook"—catapulting the pupils in a new world of windows, icons, mice, and pointing (WIMP), as described by Howard Rheingold.

Maybe the most famous person to live on Newell Road was Dr. Norman J. Lewiston, a professor of pediatrics at the Stanford University School of Medicine who had three legal wives and whose life was made into a movie (The Man with Three Wives, with his last name changed to Grayson). Newell Road was where he lived with his original wife and three children.

In late spring 1994, Yoko held one of her BBQs on her patio on Newell Road; it was attended by the group of researchers working on color fax. In this technology, the page images are compressed with JPEG. At the receiving end, the pages looked fuzzy, and the group was discussing how to improve the image quality.

The usual sharpening kernels could not be used because they would prominently sharpen the compression artifacts, making the image quality worse. Maybe it was the Chardonnay that made the group come up with one of those crazy ideas: why not do the sharpening in the compressed domain, where the artifacts are explicit? Essentially, the original is boosted, and this is accomplished by using different DQT tables for compressing and decompressing, a trick that made the operation computationally free.
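Here is a toy numerical illustration of the trick, not the method of the patent itself: quantize a block with one table, dequantize it with a table whose high-frequency entries are scaled up, and the high frequencies come back boosted at no extra computational cost. The table and gain ramp are invented.

```python
# Compressed-domain sharpening: encode with table Q, decode with a boosted table.
import numpy as np
from scipy.fft import dctn, idctn

Q = np.full((8, 8), 16.0)                          # toy luminance DQT
u, v = np.meshgrid(range(8), range(8), indexing="ij")
gain = 1.0 + 0.5 * (u + v) / 14.0                  # 1.0 at DC ... 1.5 at (7,7)

def roundtrip(block, decode_table):
    q = np.round(dctn(block, norm="ortho") / Q)    # encoder side, table Q
    return idctn(q * decode_table, norm="ortho")   # decoder side

rng = np.random.default_rng(0)
block = rng.normal(128, 30, (8, 8))
plain = roundtrip(block, Q)                        # normal decode
sharp = roundtrip(block, Q * gain)                 # decode with boosted table
print(plain.std(), sharp.std())
# The sharpened block has more high-frequency energy (larger variance).
```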

The method, described in US 5850484 A, worked like a charm, and other algorithms working in the compressed domain ensued, for example, to change the lightness or the contrast. Konstantin optimized the Huffman code and got an additional 14% compression for the color fax protocol.

This method improved the image quality, but it did not sufficiently reduce the file size, which was a showstopper. Indeed, it took 6 minutes to transmit an A4 page, but the upper time limit was considered to be 2 minutes. Optimizing the Huffman tables was not enough.

A few of Yoko's BBQs later, the group realized that visually lossless compression was too strict and made a run for perceptually lossy JPEG compression. The idea was to compress the image much more, but to move the artifacts to where the image would admittedly look bad without affecting the readability, i.e., the reading performance for a human.

The heuristic process behind this method is interesting because it is the same that is behind today's deep learning algorithms for image recognition. In 1994, the hardware was not ready for that.

heuristic process for image recognition in deep learning

This led to patent US 5883979 A, finally making the new color fax technology viable. However, the employer was not convinced of the product and pulled the plug, only to resuscitate it a couple of years later. By then the momentum had been lost and the market window was closing. In the end, the only thing that came out of the Newell Road BBQs was the book Image and Video Compression Standards: Algorithms and Architectures.

All this work was on lossy compression. The work on lossless compression came a couple of years later, 3.1 km further northwest along Middlefield Road, in Menlo Park. There, the work did not take place on a patio but at Peter's kitchen table. The issue was that Unisys was enforcing its LZW patent. Peter used LZ77, which was not patented, and followed it with Huffman coding, obtaining a better compression rate than LZW.

In compression, it is customary to standardize the decoding of the signal, not the encoding. This leads to the somewhat funny name of FLATE for the method, although the RFC calls it DEFLATE. File formats like PNG and PDF use FLATE.
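For reference, DEFLATE (LZ77 followed by Huffman coding) ships in the Python standard library as zlib, which is the same method PNG and PDF use under the FLATE name:

```python
# DEFLATE round trip with the standard library.
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 100
packed = zlib.compress(text, 9)
print(len(text), "->", len(packed), "bytes")
assert zlib.decompress(packed) == text
```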

There was a related event on Newell Road in 2002, when the US National Body's proposal for color preference semantics was written in Yoko's garage (references USNB-CD-DIA-044 to 051). This became part of the MPEG–21 standard for color vision deficiency in the digital item adaptation (DIA) part. However, although this is part of MPEG, it has nothing to do with compression.

This being Palo Alto, people are running all kinds of stuff in their garages. For example, at one point a neighbor was running Napster from his garage for a couple of months. However, that was at the time Palo Alto was running its Fiber to the Home (FTTH) pilot and about 100 houses in our neighborhood had 100 Mbps symmetric Internet connections. Legal action was taken by private industry, and the conduits many of us ran from the curb to the utility entrance in January 2000 remain empty; the fiber under Newell Road is dark. In our case, we get by with a 6 Mbps asymmetric VDSL line with a 300 GB data cap that costs us $77.44 per month for a double play including VoIP. On Newell Road in Silicon Valley's Palo Alto, Internet access is pretty miserable and expensive.

Newell Road was named for Dr. William A. Newell, a prominent physician in San Francisco, who bought 47 acres from Henry Seale in 1864 for a country estate. Parts of his house, located at 1456 Edgewood Drive, date back to 1866. In 2011, Mark Zuckerberg bought 1456 for $7 million. The updated carriage house and "cow barn," once part of the estate, remain next door at 1450.

Henry Seale, like John Greer, came from Ireland and started a contracting business in San Francisco with his brother, Thomas. In 1853 the Seale brothers acquired a large part of Rancho Rinconada del Arroyo del San Francisquito from the Soto heirs. The Seales at one time owned most of the land on which early Palo Alto was located.

Tuesday, April 19, 2016

Different analytics tools for different jobs

When people talk about analytics jobs, they usually have a mental picture of a single job and skill set. They talk about analysts, or data analysts (in Silicon Valley they may be called data scientists). However, we can structure the users of analytics tools by the kind of job they have. The builders of these tools must then have the same skills, but at a much deeper level.

The first job type is the office worker. Today, every employee is expected to be able to produce some analytics. The basic tools include the office suites from Microsoft, Google, or Apple. Proficiency in more specialized tools like Adobe Illustrator, InDesign, Acrobat, FileMaker, and Tableau is a plus. The worker is expected to be able to convert data between formats like CSV and Excel. Workers are typically given assignments like “Prepare a presentation explaining our performance and suggesting how it can be improved.” Therefore, office workers must be able to produce visualizations, where visualization refers to producing graphics from tables using Adobe Illustrator, Microsoft Excel, and PowerPoint. By nature, office workers are domain experts in their daily activities.

The second job type is that of a data analyst in a traditional company. The all-round data analyst must be proficient in a relational database system like MySQL, and in Excel. The analyst must also have a good understanding of descriptive statistics. A key skill is to be an expert in munging data across applications and file formats; this is also known as data shaping, wrangling, ETL, etc. The required statistical expertise is not deep, but basic A/B testing and Google Analytics experience are required. Presenting and selling the results of an analysis are very important, requiring the ability to do basic data visualization in Excel and Tableau. The data analyst has to have a good understanding of the company’s products and general all-round skills.

The third job type is that of an analyst in a start-up company, where a typical assignment may sound like "please munge our data." This requires proficiency in the basic tools and the ability to move fast: go for the low-hanging fruit and be able to quickly implement a new analysis or visualization by writing Excel macros, Access programs, or R functions, which in turn requires a good knowledge of the available libraries in Excel, R, or Tableau. The data analyst in a start-up company must be proficient in the implementation of advanced parsers and in creating ad hoc MySQL databases for persistent storage. Basic statistics knowledge, for example contingency tables and Poisson tests, is also a must. Since a start-up does not have historical data, the analyst must be able to do the ground-truthing themselves. As a lot of the data may come from social networks, this job type also requires the ability to use linguistics functions to clean up unstructured text and extract useful information.

An analyst in a data company has a completely different job. Here data is the product: “we are data — data is us.” This requires a formal background in mathematics, statistics, machine learning, or linguistics (natural language processing, NLP). The analyst must be able to discriminate among the various algorithms and understand their parameters. On the bright side, most data is already munged, but the analyst must be able to customize parsers and workflows. Understanding privacy laws is a must, especially the European ones, because the internet has no borders but the laws do, and the fines can be debilitating. The analyst in a data company must have a good sense of emerging techniques, like topological data analysis.

The fifth job type is that of analysts in an enterprise, where they are members of an established data team with experts in various tools. By enterprise here we mean a reasonably sized non-data company that is data-driven, to distinguish it from the second job type. The work is about data, but data is often not central to the product. An example is the fourth industrial revolution, or industry 4.0. This analyst is a generalist with broad experience, a jack-of-all-trades. For survival, this analyst must be able to find blind spots where niche roles can be played. It requires heavy experience in munging and aggregating data from all possible sources: SQL and NoSQL, logs, IoT, social networks (Twitter, LinkedIn, Facebook, etc.), news feeds, REST services, data.gov, Google Public Data Explorer, etc.

We can summarize these job types and the skills they require in this table:

Skills for analytics jobs

This is a generalization and it can be debated. For example, graph theory is topology, actually the historical beginning of it, but topological data analysis focuses on point clouds to build graphs, while traditional graph theory uses completely different mathematical tools to analyze graphs, which is why I listed them as two different items. One could also summarize the skills in this list:

  • Tools of the trade: SQL, R, Java, Scala, Python, Spark, MapReduce, …
  • Basic statistics: distributions, maximum likelihood estimation, statistical tests, regression, …
  • Machine learning: k-nearest neighbors, random forests, …
  • Linear algebra and multivariate calculus
  • Data munging: imputation, parsing, formatting; aka wrangling, shaping
  • Data visualization and communication: Tableau, ggplot, d3.js
  • Software engineering: logging, performance analysis, REST interfaces, connectors, …
  • Curiosity for emerging technologies, like algebraic topology
  • Thinking like a data scientist: business sense, approximations, teamwork, …

Monday, April 18, 2016

Data Shamans

The benefit of attending meet-ups and conferences is that, compared to papers and webinars, you can hear and understand questions and you can talk to the speakers and other audience members during the breaks. Especially at conferences, in the formal presentation you hear a scientific report, while in the breaks you can learn about all the false turns the researchers took in their endeavors, which have no place in short scientific communications.

As I mentioned last October regarding the ACM Data Science Camp Silicon Valley, the field of advanced analytics is full of hype. Data scientists are perceived like demigods, but in reality, their employment can be insecure and harsh.

Indeed, I often hear from data scientists that they are treated like shamans, i.e., a person regarded as having access to, and influence in, the world of benevolent and malevolent spirits, who typically enters into a trance state during a ritual, and practices divination and healing.

When the organization has a problem it cannot solve and the scientists or engineers are at their wits' end, they collect big data and deposit it at the feet of their data scientists in the hope of getting a miracle by the next day. A problem can only be solved when the causality is known, and correlation does not imply causation. There is no magic algorithm the data scientists can throw at the data to solve the engineering riddle.

In the end, the data scientists have to be able to go back to first principles. However, their training and experience make them more reluctant to project preconceptions onto the data, and their toolbox allows them to formulate hypotheses and test them statistically more efficiently. There are no data shamans.

Not a data shaman

Thursday, April 14, 2016

The Shape of Data

Last week I attended a San Francisco Bay Area ACM Chapter event at Pivotal Labs, which now occupies one of the former HP Labs buildings up on Deer Creek Road. The speaker was Gunnar Carlsson and the topic was algebraic topology analytics. I was waiting to write this post until the slides would be posted, but they never materialized—maybe the fancy rendering of a coffee mug metamorphosing into a topologically equivalent donut broke the system.

I must admit that what attracted me the most to attend was to see how Gunnar Carlsson would give a presentation on a very arcane topic requiring intimate familiarity with Betti numbers, functional persistence barcodes, simplicial complexes, and Vietoris-Rips complexes to the 244 registered attendees, probably mostly lacking the necessary mathematical background. I was also curious to see if he would reveal some of the fast algorithms which he must have invented to perform the very complex calculations involved.

He did a superb job! After the demonstration that for mathematicians coffee mugs and donuts are equivalent, he had everybody's attention. He then showed some examples of how conventional methods for regression and cluster analysis can fail and lead to completely incorrect conclusions, leaving the task of understanding topological pattern recognition for point cloud data as an exercise.

Gunnar Carlsson started by noting that big data is not about "big" but about complexity in format and structure. Data has shape and shape matters. Therefore, the task of the data scientist is not to accumulate myriad data, but to simplify the data in a way that the shape is easily inferred.

Consider for example the point cloud on the left side of the figure below. You can import it in your favorite analytics program and perform a linear regression. This simplifies the data to two parameters: a slope and an intercept. However, if you look more carefully, you see that this is an incorrect representation of the data. Indeed, the point cloud lies on two intersecting lines; therefore, the green cross on the right is a more accurate representation of the data's shape.

A linear regression would give an incorrect result
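A quick numerical sketch of the figure's point, on made-up data: a least-squares fit to a cloud that actually lies on two intersecting lines returns a slope and an intercept that describe neither line.

```python
# Linear regression on a point cloud lying on two intersecting lines.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)
y = np.where(rng.random(400) < 0.5, 2 * x, -2 * x) + rng.normal(0, 0.05, 400)

slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)   # slope near 0: the fit averages the two lines away
```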

A second example is the confusion between recurrent data and periodic data. People tend to equate them and then use Fourier analysis on recurrent data that is not periodic, getting meaningless results. Recurrence is a concept from chaos theory and does not imply regular cycles; the El Niño Southern Oscillation (ENSO) is an example.

The solution is to use topological modeling, as in the figure. If you are older, you need to revisit topology, because the field has started to study point clouds only in the last 15 to 20 years.

The first step in a project is to determine a relevant distance metric. Examples include the Euclidean distance, the Hamming distance, and the correlation distance. The distance metric should be sensitive to nearby events but not so much to faraway events, because the interesting stuff happens close by: consider, for example, a distance metric based on the statistical moments.

The output of an algebraic topology analysis is not a set of algebraic formulæ but a network.

For practice, Carlsson recommends the World Values Survey, which contains a lot of interesting data. When you play with the data, it is often useful to consider a topological model of the space of columns rather than the rows in a data set.