Thursday, November 20, 2014

Shedding UV light on skin color

For a long time it was believed that dark complexion evolved to protect humans from skin cancer. However, this theory has a flaw: melanoma typically strikes after reproductive age, so it exerts little selective pressure. Hence, the traditional explanation for complexion cannot be correct.

Recently Nina Jablonski has hypothesized that like chimpanzees, our ancient ancestors in Africa originally had fair skin covered with hair. When they lost body hair in order to keep cool through sweating, perhaps about 1.5 million years ago, their naked skin became darker to protect it from folate-destroying UV light.

Variation in complexion may have evolved to protect folate from UV irradiation

Neural tube birth defects such as spina bifida are linked to deficiencies in folate, a naturally occurring B vitamin; Nina Jablonski learned that sunlight can destroy folate circulating in the tiny blood vessels of the skin, so under intense UV a dark complexion protects folate. Conversely, at higher latitudes a lighter complexion admits enough UV to synthesize vitamin D, and low vitamin D weakens the immune response to the mycobacterium that causes tuberculosis. Together, these opposing pressures provide a strong evolutionary explanation for the variation in complexion.

If you live in a Silicon Valley hacker dojo or wear a burka, do not forget your vitamin D pills!

Read the article in Science 21 November 2014: Vol. 346 no. 6212 pp. 934-936 DOI: 10.1126/science.346.6212.934

Friday, November 7, 2014

Sharp has developed a reflective liquid crystal display panel for wearable computer devices, such as smart-watches, that consumes 0.1% the energy of current backlit panels. The Japanese electronics maker will mass-produce the panel in Japan by next spring and ship it to device makers domestically and abroad.

The panels incorporate memory chips and save power by storing an image for a certain amount of time instead of continually retrieving the data. Panels account for more than 30% of the power usage of current wearable devices. With the new model, devices will be able to function for 30% longer before recharging.
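As a back-of-the-envelope check of how such figures relate (the panel-share values below are assumptions for illustration, not numbers from the article), if the panel draws a given fraction of total device power and that draw falls to 0.1% of its former level while everything else stays the same, the runtime scales as in this sketch:

```python
def runtime_gain(panel_share: float, new_panel_factor: float = 0.001) -> float:
    """Factor by which battery runtime grows when the panel's draw drops to
    new_panel_factor of its former level, all other consumption unchanged."""
    return 1.0 / (1.0 - panel_share * (1.0 - new_panel_factor))

for share in (0.23, 0.30, 0.40):
    print(f"panel share {share:.0%}: runtime x{runtime_gain(share):.2f}")
```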

Nikkei Asian Review, 31 October 2014

Wednesday, October 22, 2014

LEGO-inspired microfluidic blocks from 3D printer

modular fluidic and instrumentation components

Pictured is a microfluidic system assembled from modular components that were fabricated by 3D printing at the USC Viterbi School of Engineering. Krisna C. Bhargava et al. used stereolithographic printing techniques to manufacture standardized, interchangeable fluidic blocks of about 1 cm³ and assembled them by hand into variations of complex 3D circuits. Circuit behavior was predicted using design rules analogous to those used in electronic circuit design, and the modular approach alleviated the design limitations imposed by 2D circuit layouts.

Microfluidic systems promise to improve the analysis and synthesis of materials, biological or otherwise, by lowering the required volume of fluid samples, offering a tightly controlled fluid-handling environment, and simultaneously integrating various chemical processes (applications include DNA analysis, pathogen detection, clinical diagnostic testing and synthetic chemistry). To build these systems, designers have depended on microfabrication techniques that restrict them to two-dimensional layouts fabricated completely in a single step. This study introduces modular, reconfigurable components containing fluidic and sensor elements adaptable to many different microfluidic circuits. These elements can be assembled to allow for 3D routing of channels. This assembly approach allows for the application of network analysis techniques like those used in classical electronic circuit design, facilitating the straightforward design of predictable flow systems.
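To make the electrical analogy concrete, here is a minimal Python sketch (the channel dimensions, viscosity and pressure are illustrative values, not taken from the paper) that treats pressure like voltage, volumetric flow like current, and Hagen-Poiseuille channel resistance like electrical resistance, so a network of channels can be analyzed exactly like a resistor network.

```python
import math

def channel_resistance(length_m: float, radius_m: float, viscosity_pa_s: float = 1.0e-3) -> float:
    """Hagen-Poiseuille hydraulic resistance of a circular channel, in Pa*s/m^3."""
    return 8.0 * viscosity_pa_s * length_m / (math.pi * radius_m ** 4)

def series(*resistances: float) -> float:
    """Resistances in series add, exactly as in an electrical circuit."""
    return sum(resistances)

def parallel(*resistances: float) -> float:
    """Parallel resistances combine by their reciprocals, as in an electrical circuit."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Two 1 cm feed channels (100 um radius) in series with two equal branches (50 um radius)
r_feed = channel_resistance(0.01, 100e-6)
r_branch = channel_resistance(0.01, 50e-6)
r_total = series(r_feed, r_feed, parallel(r_branch, r_branch))

delta_p = 1000.0                # applied pressure drop in Pa (the "voltage")
q = delta_p / r_total           # volumetric flow rate in m^3/s (Ohm's law: I = V/R)
print(f"total flow: {q * 1e9 * 60:.1f} uL/min")
```

The same reasoning extends to Kirchhoff-style node equations for larger block assemblies, which is what makes the flow in a modular circuit predictable before it is printed.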

The authors devised computer models for eight modular fluidic and instrumentation components (MFICs), each of which would perform a simple task. They said that their work in developing these MFICs marks the first time that a microfluidic device has been broken down into individual components that can be assembled, disassembled and re-assembled repeatedly. They attribute their success to recent breakthroughs in high-resolution, micron-scale 3D printing technology.

Krisna C. Bhargava, Bryant Thompson, and Noah Malmstadt, Discrete elements for 3D microfluidics, PNAS 2014, 111 (42), 15013-15018, doi:10.1073/pnas.1414764111

Wednesday, October 8, 2014

Sharp's new panel puts finer point on digital handwriting

Sharp has developed a touch panel that can accurately pick up text written with a ballpoint pen. This sets the device apart from conventional panels, which require a stylus with a wider tip. The new liquid crystal display panel has roughly four times the sensitivity of existing models, making it the world's top performer in that regard, according to Sharp. The Japanese electronics company plans to start domestic mass production of the device next spring and make it a mainstay product in its LCD business.

Sharp's new panel works with ballpoint pens and mechanical pencils with 1mm or even narrower tips. The company's current production lines can make the panel.

Source: Nikkei

Friday, September 26, 2014

Unique hues

Color does not exist in nature; it is an illusion happening in our minds. Therefore, we cannot solve color appearance problems with physics alone. The situation gets even hairier when we talk about color, because color names are just an ephemeral convention. For example, it is easy to "synonymize" blue and cyan, or red and magenta.

The latter pair, especially, is not out of the blue. Back in the days when I was doing research in color science, I had an OSA-UCS atlas. Periodically, I received updates of color tiles. One tile stood out: the one that Boynton and Olson had determined to be the best red, with coordinates (L, j, g) = (–3.6, 1.5, –6.9), representing the consensus for the basic color term "red." Not only was it changing, but it was changing frequently. When I lined the updates up in the light booth, the best red was shifting towards magenta.

I was so puzzled about the OSA changing the tile for the best red that I called them up. I found out that they periodically received new tiles from their supplier DuPont as new pigments were developed. The OSA would then scale them visually and issue updates to the atlas. Therefore, the shift of basic red was a sign that industry was getting better at creating what humans consider to be the best red, and the hue was shifting from yellow towards magenta.

This indicates that the above synonymity is not arbitrary.

Art teachers are the ones most victimized by the non-physicality of color, especially when they have to teach the difference between additive and subtractive color, as I wrote in "Is it turquoise + fuchsia = purple or is it turquoise + fuchsia = blue?" a few years ago. Recently I received the following question: do you have any ideas as to how I can prove that red, green, yellow and blue are the unique hues?

It is not that Goethe's yellow-red-blue (today we would write yellow-magenta-cyan) primary theory is wrong. But his 1810 Farbenlehre and his battle against Newton over the theory of red-green-blue primaries were a little weird, especially considering that Newton (1642–1727) had long been dead when Goethe started fighting against him.

Go back to the 1470s in Florence. The 1453 collapse of the Eastern Roman Empire (Byzantium) had generated an influx of scientists fleeing from Constantinople to Venice, who brought along their libraries, by then over 1500 years old. Aldo Manuzio exploited this trove of works and cheap scientists to create the new profession of "publisher" and popularized knowledge that had been lost in the West. This led to the Renaissance in Italy, which essentially consisted of taking a field, studying all the books about it, and formulating a theory that could then be applied over and over. An example was Niccolò Machiavelli studying politics and creating the field of political science.

At that time, somebody we would now call "engineer" did not go to study at a university but to a place called a "bottega" or "workshop" in English. One of the most famous ones in Florence at that time was the Bottega del Verrocchio, run by Andrea di Michele di Francesco de' Cioni. Verrocchio was his nickname and can be translated as "true eye."

In the pictorial arts, at that time the hot trend was to use very vivid colors and to try to reproduce reality as faithfully as possible. In Verrocchio's bottega, students would learn about pigments and binders and research the creation of new paints to realize highly vivid colors.

One of these students was a lad from the village of Vinci, called Leonardo. Leonardo was passionate about this quest for realism and would dissect cadavers at the hospital to learn how muscles and tendons worked anatomically. For color, he invented a new method of analyzing a scene that consisted of viewing it through a series of stained glasses (a common novelty in the street markets of the 1470s). Today we would call this a spectroradiometer.

From his studies, Leonardo da Vinci formed a new theory: color is not a physical quality mastered by studying pigment admixtures; rather, it is a new phenomenon he called "perception." Perception is the capacity to have a sensate or sentient experience, what philosophers call a quale (plural: qualia). A quale is what something feels like, for example the smell of a rose or the taste of wine as a specific sensation or feeling. We experience the world through qualia, but qualia are not patterns of bits in memory, far from it.

From this he developed a methodology to paint the light illuminating a scene instead of the scene itself. The technique he developed was to paint in many layers with very little pigment. He called this technique "sfumato" and the result "sfumatura."

Working with the spectroradiometer, he realized that colors are not a discrete set bearing the names of pigments; rather, they form a set of opponents. Doing his drawings, he realized that white and black are actually colors (in 1475 they were not considered to be colors) and that they are opposites, which he called "chiaro" and "scuro," resulting in a technique he called "chiaroscuro."

Once you realize colors are not related to pigments, you have to come up with a representation based on perception. This is where language comes in. When describing a color in 1475, you would say for example "this color is a vermilion with a hint of lapislazzuli" to describe a certain pink. So in 1475 the primary colors were the pure pigments, like oltremare (lapislazzuli), azzurro (azzurrite), indaco, tornasole, vermilion, gold, etc.

Some pigments, like lapislazzuli, were very expensive, while others, like azzurrite, were more affordable. Admixture would allow you to obtain a given color with a cheaper paint that elicits the same perception. Therefore, Leonardo concluded that the old naming scheme was not useful. To find a representation, he set out to determine whether there are any perceived colors that you would never describe as an admixture of two or more colors. This led him to red, green, yellow, blue, black and white, which are his unique hues.

Further, he discovered that you would never use descriptions like reddish green, bluish yellow, or dark white. This is what he called the "color opponent system," and it allowed him to describe perceived color as a point in a 3-dimensional space. This is the basis for CIELAB and for the NCS color atlas. For the Munsell color tree there is an extra step for purple.

Leonardo da Vinci’s opponent colors, rendered here using his chiaro-scuro technique to suggest 3-dimensionality. Left: view from top. Center: view from slight left. Right: view from slight right.
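As a concrete illustration of an opponent representation, here is a minimal Python sketch (modern colorimetry, of course, not anything Leonardo could have computed) that converts an sRGB color to CIELAB, whose three axes are exactly the opponent pairs discussed above: light-dark (L*), red-green (a*) and yellow-blue (b*).

```python
def srgb_to_lab(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert an sRGB color (components in 0..1) to CIELAB, assuming a D65 white."""
    def to_linear(c: float) -> float:            # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = to_linear(r), to_linear(g), to_linear(b)

    # Linear RGB to CIE XYZ (sRGB primaries, D65 white point)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t: float) -> float:                    # CIELAB compression function
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)   # L*, a*, b*

# A saturated red has a large positive a* (red-green axis);
# a saturated yellow has a large positive b* (yellow-blue axis).
print(srgb_to_lab(1.0, 0.0, 0.0))   # roughly (53, 80, 67)
print(srgb_to_lab(1.0, 1.0, 0.0))   # roughly (97, -22, 94)
```

Note how "reddish green" or "bluish yellow" would require a color to sit at both ends of one axis at once, which is exactly why such descriptions never occur.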

With this you have the answer to the above question: you have to distinguish between the creation of color with colorants and the perception of color.

Friday, September 12, 2014

Towards a cure for macular degeneration

In macular degeneration capillaries grow out of control under the retina

Japanese researchers say they have conducted the world's first surgery using iPS cells, on a patient with macular degeneration. The operation is seen as a major step forward in regenerative medicine. A team led by Masayo Takahashi from a RIKEN research lab in Kobe performed the operation on Friday with the cooperation of a team from the Institute of Biomedical Research and Innovation.

This is a simulation of what a person with macular degeneration might be seeing

The patient was a woman in her 70s with age-related macular degeneration, which involves a progressive decline in vision. The researchers obtained a small amount of the patient's skin cells and turned them into induced pluripotent stem cells. Using the iPS cells' ability to develop into any kind of body tissue, the team then transformed them into retinal tissue.

Patch of retinal tissue grown from the patient's iPS; this patch replaces a removed degenerated patch of the retina

Part of the patient's deteriorating retina was then surgically replaced with the iPS-derived tissue. The patient reportedly came out of anesthesia after being under for approximately 3 hours. The researchers told reporters the patient is recovering well in a hospital room. They added there has been no excessive bleeding or other problems.

Yasuo Kurimoto of the Institute of Biomedical Research and Innovation said he believes the surgery was successful. Masayo Takahashi of the RIKEN lab said she's relieved the surgery was completed safely. She added that although she wants to believe the first clinical case was a major step forward, much more development is needed to establish iPS surgery as a treatment method.

The researchers say the primary objective of the operation was to check the safety of the therapy. They say that since the patient has already lost most of her vision-related cells, the retinal transplant would only slightly improve her eyesight or slow its loss. But the researchers say the therapy could become a fundamental cure if its safety and efficacy can be confirmed by the transplant.

They plan to monitor the patient over the next 4 years. iPS cells were developed by Kyoto University Professor Shinya Yamanaka, who was awarded the 2012 Nobel Prize in Physiology or Medicine. This first-ever use of such cells in a human patient is seen as a major step forward for regenerative medicine — a kind of therapy aimed at restoring diseased organs and tissue.

Source: http://www3.nhk.or.jp/nhkworld/english/news/20140912_53.html

Tuesday, September 2, 2014

You only see what you want to see

Scientists sometimes have funny ways to name entities. Laymen then do not know if the topic is serious or their leg is being pulled. For example, in high energy physics the types of quarks are called flavors, and the flavors are called up, down, strange, charm, bottom, and top.

Molecular biologists tend to have even more bizarre ways to name their entities. For example, in wet color science, to study top-down modulation in the visual system, researchers may breed loxP-flanked tdTomato reporter mice with parvalbumin-, somatostatin-, or vasoactive intestinal peptide-Cre positive interneuron mice.

But then, these wet experiments in physiological research are very difficult and tedious. In practical color science, we mostly take a bottom-up approach, which most of the time works acceptably in engineering terms, but then can fail miserably in corner cases. More complete models are possible only when we take into account top-down processes, because in the visual system most information is transmitted top-down, not bottom-up.

Building models is a creative process in which one can easily get carried away, so in color science we always have to question the physiological basis for each model we propose. It is this physiological research that is very difficult. Recently a team from the University of California, Berkeley and Stanford University here in Palo Alto (Siyu Zhang, Min Xu, Tsukasa Kamigaki, Johnny Phong Hoang Do, Wei-Cheng Chang, Sean Jenvay, Kazunari Miyamichi, Liqun Luo and Yang Dan) has accomplished such a feat.

We often focus on a particular item out of a thousand objects in a visual scene. This ability is called selective attention. Selective attention enhances the responses of sensory nerve cells to whatever is being observed and dampens responses to any distractions. Zhang et al. identified a region of the mouse forebrain that modulates responses in the visual cortex. This modulation improved the mouse's performance in a visual task.

Before the work of Zhang et al., the synaptic circuits mediating top-down modulation were largely unknown. In particular, because long-range corticocortical projections are primarily glutamatergic (excitatory), it was unclear whether and how they could provide center-surround modulation.

To examine the circuit mechanism of top-down modulation in mouse brain, Zhang et al. first identified neurons in the frontal cortex that directly project to visual cortex by injecting fluorescent latex microspheres (Retrobeads) into V1. They found numerous retrogradely labeled neurons in the cingulate area. To visualize the axonal projections from cingulate excitatory neurons, they injected adeno-associated virus [AAV-CaMKIIα-hChR2(H134R)-EYFP] into the cingulate.

Center-surround modulation of visual cortical responses induced by Cg axon stimulation after blocking antidromic spiking of Cg neurons

They discovered that somatostatin-positive neurons strongly inhibit pyramidal neurons in response to cingulate input 200 μm away. That they also mediate suppression by visual stimuli outside of the receptive field suggests that both bottom-up visual processing and top-down attentional modulation use a common mechanism for surround suppression.

Citation and link: Long-range and local circuits for top-down modulation of visual cortex processing. Siyu Zhang, Min Xu, Tsukasa Kamigaki, Johnny Phong Hoang Do, Wei-Cheng Chang, Sean Jenvay, Kazunari Miyamichi, Liqun Luo, and Yang Dan Science 8 August 2014: 345 (6197), 660-665. [DOI:10.1126/science.1254126]

Saturday, August 9, 2014

Photon Hunting in the Twilight Zone

Deep in the twilight zone of the ocean, small, glowing sharks have evolved special eye features to maximize the amount of light they see, researchers report this week in PLOS ONE. The scientists mapped the eye shape, structure, and retina cells of five deep-sea bioluminescent sharks, predators that live 200 to 1000 meters deep in the ocean, where light hardly penetrates.

The sharks have developed many coping strategies. Their eyes possess a higher density of rods than those of non-bioluminescent sharks, which might enable them to see fast-changing light patterns. Such an ability would be particularly useful when the animals emit light to communicate with one another. Some species also have a gap between the lens and the iris that lets extra light reach the retina, a feature previously unknown in sharks.

In the eyes of lantern sharks (Etmopteridae), the scientists discovered a translucent area in the upper socket. The researchers suspect this feature might help the sharks adjust their glow to match the sunlight for camouflage.

I wonder if we computer nerds will evolve our visual system similarly.

Citation (Open Access):

Claes JM, Partridge JC, Hart NS, Garza-Gisholt E, Ho H-C, et al. (2014) Photon Hunting in the Twilight Zone: Visual Features of Mesopelagic Bioluminescent Sharks. PLoS ONE 9(8): e104213. doi:10.1371/journal.pone.0104213

Tuesday, July 29, 2014

Traffic Lights and Visualization


Photo Attribution: Helgi Halldórsson

From Susanne Tak and Alexander Toet comes "Color and Uncertainty: It is not always Black and White" which finds that:

"A 'traffic light' configuration (with red and green at the endpoints and either yellow or orange in the middle) communicates uncertainty most intuitively."

A video presentation is also online.
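As a small illustration of such an encoding (my own sketch, not the authors' implementation; mapping green to "certain" and red to "uncertain" is an assumption, since the paper only evaluates the configuration), an uncertainty value in [0, 1] can be mapped onto a green-yellow-red ramp:

```python
def traffic_light_rgb(uncertainty: float) -> tuple[int, int, int]:
    """Map uncertainty in [0, 1] to an RGB color on a green-yellow-red ramp."""
    u = min(max(uncertainty, 0.0), 1.0)
    if u < 0.5:                       # green (0, 200, 0) -> yellow (255, 220, 0)
        t = u / 0.5
        return (int(255 * t), int(200 + 20 * t), 0)
    t = (u - 0.5) / 0.5               # yellow (255, 220, 0) -> red (220, 0, 0)
    return (int(255 - 35 * t), int(220 * (1 - t)), 0)

print(traffic_light_rgb(0.1), traffic_light_rgb(0.5), traffic_light_rgb(0.9))
```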


Thursday, July 17, 2014

Tic-tac-toe patent 8,770,625 in color

As noted on lines 23 and 24 in column 4 of the printed version of patent 8,770,625, U.S. Patent Office procedure discourages the use of color drawings. This makes Fig. 4 a little hard to visualize for the non-color scientist (there are no color figures in Wyszecki & Stiles), so here it is in color (right pane):

Figure 4 of US patent 8770625

The invention is relatively simple. The general field is anti-counterfeiting as it applies to packaging. Professional counterfeiters have no problem faking ordinary measures like serial numbers and holograms, so the trick is to embed information that cannot easily be perceived by a counterfeiter and hence is omitted in the facsimile. Fortunately, color does not exist in nature; it is just an illusion happening in our minds. Therefore, all we have to do is create an illusion you can perceive only if you expect it.

As described in patent 8,770,625, a number computed from the—possibly counterfeited—serial number on the package can be encoded positionally in a tic-tac-toe grid. The marking is just above the visual threshold, so the naive counterfeiter will reproduce the same pattern on all packages. The trained inspector can then quickly ascertain whether the actual positional code corresponds, for example, to the possibly fake serial number.
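As a hypothetical illustration of positional encoding (the hashing scheme, function names and serial format below are my own invention, not the algorithm disclosed in the patent), a code derived from the serial number can select cells of a 3×3 grid, which an inspector can recompute and compare against the printed marks:

```python
import hashlib

GRID_SIZE = 3  # tic-tac-toe grid: 3 x 3 = 9 possible positions

def code_from_serial(serial: str, n_marks: int = 2) -> set[tuple[int, int]]:
    """Derive n_marks grid positions from a serial number (hypothetical scheme)."""
    digest = hashlib.sha256(serial.encode("utf-8")).digest()
    cells: set[tuple[int, int]] = set()
    for byte in digest:
        cells.add(divmod(byte % (GRID_SIZE * GRID_SIZE), GRID_SIZE))
        if len(cells) == n_marks:
            break
    return cells

def render(cells: set[tuple[int, int]]) -> str:
    """Draw the grid; on a real package the marks sit just above the visual threshold."""
    return "\n".join(
        " ".join("X" if (row, col) in cells else "." for col in range(GRID_SIZE))
        for row in range(GRID_SIZE)
    )

print(render(code_from_serial("PKG-012345")))
```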

Patent 8,770,625 is relatively short with just three claims, but reducing it to practice is a little tricky, even when all the steps are disclosed in the patent. The difficult part is to design the tool to determine experimentally the visual thresholds for the print process being used and the light conditions under which the inspections are expected to happen. You need to be skilled in the art.

The above figure is a screenshot of that tool. To implement it, you need to write a spectral color management system with CIE colorimetry to simulate the press on the display and vision colorimetry to model what the actual human visual system perceives. The details of the controls are explained in patent 8,770,625.
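As a sketch of the colorimetric core such a tool needs (the file names and data layout below are assumptions; the real tool also models the press, the substrate, and the visual thresholds), a spectral reflectance is integrated against the CIE color matching functions weighted by the viewing illuminant to obtain tristimulus values:

```python
import numpy as np

# Assumed inputs: the CIE 1931 2-degree color matching functions and an
# illuminant SPD (e.g. D50 for print viewing), tabulated at the same
# wavelengths (say 380-730 nm in 10 nm steps) as whitespace-separated columns.
wavelengths, xbar, ybar, zbar = np.loadtxt("cie_1931_2deg_cmf.txt", unpack=True)
_, illuminant = np.loadtxt("illuminant_d50.txt", unpack=True)

def reflectance_to_xyz(reflectance: np.ndarray) -> np.ndarray:
    """Discrete summation of reflectance x illuminant against the CMFs."""
    k = 100.0 / np.sum(illuminant * ybar)   # normalize so a perfect white has Y = 100
    stimulus = reflectance * illuminant
    return k * np.array([np.sum(stimulus * xbar),
                         np.sum(stimulus * ybar),
                         np.sum(stimulus * zbar)])
```

From there, a press simulation could predict a reflectance spectrum for each ink combination (for example with a Neugebauer-type model), push it through reflectance_to_xyz, and map the result to display RGB for on-screen proofing, judging whether two patches differ by more than the visual threshold in a perceptual space such as CIELAB.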

Depending on your viewing conditions, the above color version of Fig. 4 might be under the visual threshold. If that is the case, in the figure below we crank up the saliency and decrease the background coverage, so you will see the encoding for sure. If you have aliasing problems, you can click on the figures to display them at the original resolution in which they were created eight years ago, early July 2006 (time flies).

A more salient alternative to Figure 4 of US patent 8,770,625