Sunday, January 4, 2015

International Year of Light

Here in Switzerland the weather tends to be bad and we have a Zwinglian/Calvinistic Leitkultur, which might explain our tendency towards pessimism and feeling more unlucky than lucky: it is customary to first look at the negative side of things and then to let ourselves be pleasantly surprised when things turn out well. In this context, nobody is surprised when the newspapers announce the new year by listing negative anniversaries: 700 years since Morgarten, 500 years since Marignano, 70 years since the end of World War II.

Today's Zürich is home to many computer science labs and the city has as many nerds as gnomes (the equivalent persona in banking). They may see 2015 as the year of the palindrome, because 2015₁₀ = 11111011111₂. The many mathematicians in Zürich may instead see 2015 as a cube, because in the Japanese calendar it is 平成27年, or Heisei 27, and 27 = 3³. For movie buffs, this year is MMXV.
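
Both numerological claims are easy to verify in a few lines of Python (a quick sketch; the helper name is mine):

```python
# 2015 written in base 2 reads the same forwards and backwards,
# and Heisei 27 is a perfect cube: 27 = 3**3.
def is_binary_palindrome(n: int) -> bool:
    bits = format(n, "b")  # binary representation without the "0b" prefix
    return bits == bits[::-1]

print(format(2015, "b"))           # 11111011111
print(is_binary_palindrome(2015))  # True
print(27 == 3 ** 3)                # True
```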

For color scientists, 2015 is the International Year of Light and Light-based Technologies, a United Nations observance that aims to raise awareness of the achievements of light science and its applications, and of their importance to humankind. The IYL 2015 will launch at the UNESCO headquarters in Paris on 19 January 2015, with the unveiling of 1001 Inventions and the World of Ibn Al-Haytham.

Indeed, 2015 marks the anniversaries of several events related to light, optics, and vision:

  • 1015, a millennium ago, the Iraqi scientist Ibn Al-Haytham published his Book of Optics
  • 1815 Augustin-Jean Fresnel proposed the notion of light as a wave
  • 1865 James Clerk Maxwell proposed the electromagnetic theory of light propagation
  • 1915 Albert Einstein embedded his 1905 theory of the photoelectric effect into cosmology through general relativity
  • 1965 Arno Penzias and Robert Woodrow Wilson discovered the cosmic microwave background
  • 1965 Charles Kao theorized and proposed to use glass fibers to implement optical broadband communication

In ancient Greece, there were two competing theories of vision. One theory was called the emission theory (Euclid, Ptolemy) and claimed that vision worked by little flames exiting the eye, traveling on rays, scanning the objects in the visual field, and returning to the eye to report what they detected. In the intromission theory (Aristotle), when an object is looked at, it replicates itself and the replica travels along a ray into the viewer's eye, where it is seen.

For a millennium, there was a raging discussion of whether the emission theory or the intromission theory was the correct one. This discussion was based purely on theoretical considerations and heuristics. In his 1015 book, Ibn Al-Haytham introduced the modern concept of scientific research based on experimentation and controlled testing that we still use today: a hypothesis is formulated, an experiment is conducted varying the parameters, the results of the experiment are discussed, and the conclusions are drawn. Because of this, Ibn Al-Haytham is often referred to as the first scientist.

Using the scientific method, Ibn Al-Haytham developed the first plausible theory of vision. Among other contributions, he also explained the camera obscura and catoptrics. He strongly influenced later scientists like Averroes, Leonardo da Vinci, Galileo Galilei, Christiaan Huygens, René Descartes, and Johannes Kepler.

Ibn Al-Haytham's full name was Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham. His Latinized name was originally Alhacen; since 1572, when Friedrich Risner misspelled his name, in the West he has been known as Alhazen. He was born and raised in Basra, where he initially worked. Later he worked in Baghdad and Cairo.

For more information on the International Year of Light see here.

Monday, December 22, 2014

Glistenings in pseudophakic vision

cc Rakesh Ahuja, MD. Aftercataract - Posterior capsular opacification post-cataract surgery (seen on retroillumination)

As we age, our crystalline lens becomes cloudy, and we call it a cataract, maybe because the world is seen as if from behind a large foaming waterfall. The Romans already carried out cataract operations 2,000 years ago, so the medical remedy is pretty much routine: the cataract is removed surgically and replaced with an intra-ocular lens (IOL). Such an IOL is generally known as a pseudophakic IOL and provides the light-focusing function originally performed by the crystalline lens.

The ancient Greek word for lens is phakos, so phakia is the presence of the natural crystalline lens. Pseudophakia is the substitution of the natural crystalline lens with an IOL.

One problem of the pseudophakic patient is that sometimes, in the weeks or months after the surgical procedure, visual discomfort due to glistenings is experienced. The glistenings are due to micro-vacuoles in the IOL, the vacuoles being part of the polymer's structure. After the IOL has been implanted, water can fill these vacuoles, and water has a different refractive index (1.33) than the polymer (~1.55).
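
The index mismatch alone already suggests why the vacuoles scatter light. A back-of-the-envelope gauge (a sketch, not the authors' ray-tracing model; the function name is mine) is the Fresnel reflectance at normal incidence for each polymer–water interface:

```python
# Fresnel reflectance at normal incidence between two media of
# refractive index n1 and n2: R = ((n1 - n2) / (n1 + n2))**2.
def fresnel_reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

# Polymer (~1.55) against a water-filled vacuole (1.33): each interface
# reflects roughly half a percent of the incident light, which adds up
# over many vacuoles in the light path.
r = fresnel_reflectance(1.55, 1.33)
print(f"{r:.3%}")  # about 0.584%
```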

What was not known is how much the glistenings impact visual performance. One research technique was to measure the MTF. However, although MTF is related to visual acuity, it is not related to global contrast and does not explain the visual discomfort. Alessandro Franchini, Andrea Romolo, and Iacopo Franchini implemented a ray-tracing program to model and analyze the effect of the vacuoles.

They found that when a light source is in the field of view, without glistenings a clear secondary image is produced, but with glistenings light scattering introduces noise over the entire visual field, reducing the global contrast.

The solution is to use hydrophobic acrylic lenses and to keep them in water before implanting them. With this, the IOL will contain 4% water instead of the usual 2%. After the lens is implanted, the flow of liquid that causes the glistenings will then not occur.

Citation: Alessandro Franchini, Andrea Romolo and Iacopo Franchini, Effect of glistenings on the pseudophakic patient vision, Atti della Fondazione Giorgio Ronchi, Vol. LXIX, N. 5, pp. 589–599.

Thursday, November 20, 2014

Shedding UV light on skin color

For a long time it was believed that a dark complexion evolved to protect humans from skin cancer. However, this theory has a flaw: melanoma is typically contracted after reproductive age, so it has little impact on reproductive success. Hence, the traditional explanation for complexion cannot be correct, because melanoma exerts almost no evolutionary pressure.

Recently Nina Jablonski has hypothesized that like chimpanzees, our ancient ancestors in Africa originally had fair skin covered with hair. When they lost body hair in order to keep cool through sweating, perhaps about 1.5 million years ago, their naked skin became darker to protect it from folate-destroying UV light.

Variation in complexion may have evolved to protect folate from UV irradiation

Neural tube birth defects such as spina bifida are linked to deficiencies in folate, a naturally occurring form of vitamin B; Nina Jablonski learned that sunlight can destroy folate circulating in the tiny blood vessels of the skin. Conversely, skin that is too dark for its latitude synthesizes less vitamin D, and lower vitamin D weakens the immune response to the mycobacterium that causes tuberculosis. Together, these opposing pressures provide a strong evolutionary explanation for the variation in complexion.

If you live in a Silicon Valley hacker dojo or wear a burka, do not forget your vitamin B pills!

Read the article in Science 21 November 2014: Vol. 346 no. 6212 pp. 934-936 DOI: 10.1126/science.346.6212.934

Friday, November 7, 2014

New reflective LCD panel slashes power consumption

Sharp has developed a reflective liquid crystal display panel for wearable computer devices, such as smartwatches, that consumes 0.1% of the energy of current backlit panels. The Japanese electronics maker will mass-produce the panel in Japan by next spring and ship it to device makers domestically and abroad.

The panels incorporate memory chips and can save power on retrieving data by storing images for a certain amount of time. Panels account for more than 30% of the power usage of current wearable devices. With the new model, devices will be able to function for 30% longer before recharging.

Nikkei Asian Review, 31 October 2014

Wednesday, October 22, 2014

LEGO-inspired microfluidic blocks from 3D printer

modular fluidic and instrumentation components

Pictured is a microfluidic system assembled from modular components that were fabricated using 3D printing at the USC Viterbi School of Engineering. Krisna C. Bhargava et al. used stereolithographic printing techniques to manufacture standardized, interchangeable fluidic blocks of about 1 cm³ and assembled them by hand to produce variations of complex 3D circuits. Circuit behavior was predicted using design rules analogous to those used in electronic circuit design, and the modular approach alleviated the design limitations imposed by 2D circuit layouts.

Microfluidic systems promise to improve the analysis and synthesis of materials, biological or otherwise, by lowering the required volume of fluid samples, offering a tightly controlled fluid-handling environment, and simultaneously integrating various chemical processes (applications include DNA analysis, pathogen detection, clinical diagnostic testing and synthetic chemistry). To build these systems, designers depend on microfabrication techniques that restrict them to arranging their designs in two dimensions and completely fabricating their design in a single step. This study introduces modular, reconfigurable components containing fluidic and sensor elements adaptable to many different microfluidic circuits. These elements can be assembled to allow for 3D routing of channels. This assembly approach allows for the application of network analysis techniques like those used in classical electronic circuit design, facilitating the straightforward design of predictable flow systems.
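
The electronic-circuit analogy behind this network analysis can be sketched with the textbook hydraulic version of Ohm's law (this is the standard Hagen–Poiseuille analogy, not code from the paper; the function names and example dimensions are mine):

```python
import math

# Laminar flow through a circular channel obeys Q = ΔP / R with
# R = 8·μ·L / (π·r⁴) (Hagen–Poiseuille), the hydraulic analogue of
# Ohm's law; resistances combine in series and in parallel exactly
# as in electronic circuit design.
def hydraulic_resistance(mu: float, length: float, radius: float) -> float:
    """Resistance (Pa·s/m³) of a circular channel; mu in Pa·s, lengths in m."""
    return 8 * mu * length / (math.pi * radius ** 4)

def series(*rs: float) -> float:
    return sum(rs)

def parallel(*rs: float) -> float:
    return 1.0 / sum(1.0 / r for r in rs)

# Two identical 1 cm water channels (μ ≈ 1e-3 Pa·s) of 100 µm radius:
r = hydraulic_resistance(1e-3, 0.01, 100e-6)
print(series(r, r), parallel(r, r))  # chaining doubles R, branching halves it
```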

The authors devised computer models for eight modular fluidic and instrumentation components (MFICs), each of which would perform a simple task. They said that their work in developing these MFICs marks the first time that a microfluidic device has been broken down into individual components that can be assembled, disassembled and re-assembled repeatedly. They attribute their success to recent breakthroughs in high-resolution, micron-scale 3D printing technology.

Krisna C. Bhargava, Bryant Thompson, and Noah Malmstadt, Discrete elements for 3D microfluidics, PNAS 2014 111 (42) 15013–15018, doi:10.1073/pnas.1414764111

Wednesday, October 8, 2014

A touch panel that works with ordinary ballpoint pens

Sharp has developed a touch panel that can accurately pick up text written with a ballpoint pen. This sets the device apart from conventional ones that require a stylus with a wider tip. The new liquid crystal display panel has roughly four times the sensitivity of previous models, making it the world's top performer in that regard, according to Sharp. The Japanese electronics company plans to start domestic mass production of the device next spring and make it a mainstay product in its LCD business.

Sharp's new panel works with ballpoint pens and mechanical pencils with 1mm or even narrower tips. The company's current production lines can make the panel.

Source: Nikkei

Friday, September 26, 2014

Unique hues

Color does not exist in nature; it is an illusion happening in our mind. Therefore, we cannot solve color appearance problems with physics alone. The situation gets even hairier when we converse about color, because color names are just an ephemeral convention. For example, it is easy to "synonymize" blue and cyan, or red and magenta.

The latter, especially, is not out of the blue. Back in the days when I was doing research in color science, I had an OSA-UCS atlas. Periodically, I would receive updates of color tiles. One tile stood out: the one that Boynton and Olson had determined to be the best red, with coordinates (L, j, g) = (–3.6, 1.5, –6.9), representing the consensus for the basic color term "red." Not only was it changing, but it was changing frequently. When I lined the updates up in the light booth, the best red was shifting towards magenta.

I was so puzzled about the OSA changing the tile for the best red that I called them up. I found out that they periodically received new tiles from their supplier DuPont whenever it developed new pigments. The OSA would then scale them visually and issue updates to the atlas. Therefore, the shift of basic red was a sign that industry was getting better at creating what humans consider to be the best red, and the hue was shifting from yellow towards magenta.

This indicates that the above synonymity is not arbitrary.

Art teachers are the ones most victimized by the non-physicality of color, especially when they have to teach the difference between additive and subtractive color, as I wrote in "Is it turquoise + fuchsia = purple or is it turquoise + fuchsia = blue?" a few years ago. Recently I received the following question: do you have any ideas as to how I can prove that red, green, yellow, and blue are the unique hues?

It is not that Goethe's yellow-red-blue (today we would write yellow-magenta-cyan) primary theory is wrong. But his 1810 Farbenlehre and his battle against Newton's theory of red-green-blue primaries were a little weird, especially considering that Newton (1642–1727) had long been dead when Goethe started fighting against him.

Go back to the 1470s in Florence. The 1453 collapse of the Eastern Roman Empire (Byzantium) had generated an influx of scientists fleeing from Constantinople to Venice, who brought along their libraries of works more than 1,500 years old. Aldo Manuzio exploited this trove of works and cheap scholars to create the new profession of "publisher" and popularized this knowledge that had been lost in the West. This led to the Renaissance in Italy, which essentially consisted of taking a field, studying all the books about it, and formulating a theory of that field that could then be applied over and over. An example was Niccolò Machiavelli studying politics and creating the field of political science.

At that time, somebody we would now call an "engineer" did not study at a university but at a place called a "bottega," or "workshop" in English. One of the most famous in Florence at that time was the Bottega del Verrocchio, run by Andrea di Michele di Francesco de' Cioni. Verrocchio was his nickname and can be translated as "true eye."

In the pictorial arts, at that time the hot trend was to use very vivid colors and to try to reproduce reality as faithfully as possible. In Verrocchio's bottega, students would learn about pigments and binders and research the creation of new paints to realize highly vivid colors.

One of these students was a lad from the village of Vinci, called Leonardo. Leonardo was passionate about this quest for realism and would dissect cadavers at the hospital to learn how muscles and tendons worked anatomically. For color, he invented a new method to analyze a scene, consisting of viewing it through a series of stained glasses (a common novelty in the street markets of the 1470s). Today we would call this a spectroradiometer.

From his studies, Leonardo da Vinci formed a new theory: color is not a physical quality mastered by studying pigment admixtures; rather, it is a new phenomenon he called "perception." Perception is the capacity to have a sensate or sentient experience, what philosophers call a quale (the plural is qualia). A quale is what something feels like: for example, the smell of a rose or the taste of wine as specific sensations or feelings. We experience the world through qualia, but qualia are not patterns of bits in memory, far from it.

From this he developed a methodology to paint the light illuminating a scene instead of the scene itself. The technique he developed was to paint in many layers with very little pigment. He called this technique "sfumato" and the result "sfumatura."

Working with the spectroradiometer, he realized that colors are not a discrete set named after pigments. Rather, they form a set of opponents. Doing his drawings, he realized that white and black are actually colors (in 1475 they were not considered to be colors) and that they are opposites, which he called "chiaro" and "scuro," resulting in a technique he called "chiaroscuro."

Once you realize colors are not related to pigments, you have to come up with a representation based on perception. This is where language comes in. When describing a color in 1475, you would say, for example, "this color is a vermilion with a hint of lapislazzuli" to describe a certain pink. So in 1475 the primary colors were the pure pigments, like oltremare (lapislazzuli), azzurro (azzurrite), indaco, tornasole, vermilion, gold, etc.

Some pigments like lapislazzuli were very expensive and others like azzurrite were more affordable. Admixture would allow you to obtain a given color with a cheaper paint that elicits the same perception. Therefore, Leonardo concluded that the old naming scheme was not useful. To find a representation, he then set out to determine whether there are any perceived colors that you would never describe as an admixture of two or more colors. This led him to red, green, yellow, blue, black, and white, which are his unique hues.

Further, he discovered that you would never use descriptions like reddish green, bluish yellow, or dark white. This is what he called the "color opponent system" and allowed him to describe perceived color as points in a 3-dimensional space. This is the basis for CIELAB and for the NCS color atlas. For the Munsell color tree there is an extra step for purple.
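
The connection to CIELAB is direct: its axes are exactly such opponent pairs, L* (light–dark), a* (red–green), and b* (yellow–blue). Here is a minimal sketch of the standard CIE XYZ → L*a*b* transform, assuming the D65 white point (the function name is mine):

```python
# Convert CIE XYZ to CIELAB opponent coordinates: L* opposes light to
# dark, a* opposes red to green, b* opposes yellow to blue.
def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):  # D65 white point
    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = xyz_to_lab(41.24, 21.26, 1.93)  # XYZ of an sRGB pure red
print(L, a, b)  # a* strongly positive (red side), b* positive (yellow side)
```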

Leonardo da Vinci’s opponent colors, rendered here using his chiaroscuro technique to suggest 3-dimensionality. Left: view from the top. Center: view slightly from the left. Right: view slightly from the right.

With this you have the answer to the above question: you have to distinguish between the creation of color with colorants and the perception of color.

Friday, September 12, 2014

Towards a cure for macular degeneration

In macular degeneration capillaries grow out of control under the retina

Japanese researchers say they have conducted the world's first surgery using iPS cells, on a patient with macular degeneration. The operation is seen as a major step forward in regenerative medicine. A team led by Masayo Takahashi from a RIKEN research lab in Kobe performed the operation on Friday with the cooperation of a team from the Institute of Biomedical Research and Innovation.

This is a simulation of what a person with macular degeneration might be seeing

The patient was a woman in her 70s with age-related macular degeneration, which involves a progressive decline in vision. The researchers obtained a small amount of the patient's skin cells and turned them into induced pluripotent stem cells. Using the iPS cells' ability to develop into any kind of body tissue, the team then transformed them into retinal tissue.

Patch of retinal tissue grown from the patient's iPS cells; this patch replaces a removed degenerated patch of the retina

Part of the patient's deteriorating retina was then surgically replaced with the iPS-derived tissue. The patient reportedly came out of anesthesia after being under for approximately 3 hours. The researchers told reporters the patient is recovering well in a hospital room. They added there has been no excessive bleeding or other problems.

Yasuo Kurimoto of the Institute of Biomedical Research and Innovation said he believes the surgery was successful. Masayo Takahashi of the RIKEN lab said she's relieved the surgery was completed safely. She added that although she wants to believe the first clinical case was a major step forward, much more development is needed to establish iPS surgery as a treatment method.

The researchers say the primary objective of the operation was to check the safety of the therapy. They say that since the patient has already lost most of her vision-related cells, the retinal transplant would only slightly improve her eyesight or slow its loss. But the researchers say the therapy could become a fundamental cure if its safety and efficacy can be confirmed by the transplant.

They plan to monitor the patient over the next 4 years. iPS cells were developed by Kyoto University Professor Shinya Yamanaka, who was awarded the 2012 Nobel Prize in Physiology or Medicine. This first-ever use of such cells in a human patient is seen as a major step forward for regenerative medicine — a kind of therapy aimed at restoring diseased organs and tissue.

Source: http://www3.nhk.or.jp/nhkworld/english/news/20140912_53.html

Tuesday, September 2, 2014

You only see what you want to see

Scientists sometimes have funny ways to name entities. Laymen then do not know if the topic is serious or their leg is being pulled. For example, in high energy physics the types of quarks are called flavors, and the flavors are called up, down, strange, charm, bottom, and top.

Molecular biologists tend to have even more bizarre ways to name their entities. For example, in wet color science to study top-down modulation in the visual system they may breed loxP-flanked tdTomato reporter mice with parvalbumin-, somatostatin-, or vasoactive intestinal peptide-Cre positive interneuron mice.

But then, these wet experiments in physiological research are very difficult and tedious. In practical color science, we mostly take a bottom-up approach, which most of the time works acceptably in engineering terms but can fail miserably in corner cases. More complete models are possible only when we take top-down processes into account, because in the visual system most information is transmitted top-down, not bottom-up.

Building models is a creative process in which one can easily get carried away, so in color science we always have to question the physiological basis for each model we propose. It is this physiological research that is very difficult. Recently a team from the University of California, Berkeley and Stanford University here in Palo Alto (Siyu Zhang, Min Xu, Tsukasa Kamigaki, Johnny Phong Hoang Do, Wei-Cheng Chang, Sean Jenvay, Kazunari Miyamichi, Liqun Luo, and Yang Dan) accomplished such a feat.

We often focus on a particular item out of a thousand objects in a visual scene. This ability is called selective attention. Selective attention enhances the responses of sensory nerve cells to whatever is being observed and dampens responses to any distractions. Zhang et al. identified a region of the mouse forebrain that modulates responses in the visual cortex. This modulation improved the mouse's performance in a visual task.

Before the work of Zhang et al., the synaptic circuits mediating top-down modulation were largely unknown. In particular, because long-range corticocortical projections are primarily glutamatergic, it was unknown whether and how they could provide center-surround modulation.

To examine the circuit mechanism of top-down modulation in mouse brain, Zhang et al. first identified neurons in the frontal cortex that directly project to visual cortex by injecting fluorescent latex microspheres (Retrobeads) into V1. They found numerous retrogradely labeled neurons in the cingulate area. To visualize the axonal projections from cingulate excitatory neurons, they injected adeno-associated virus [AAV-CaMKIIα-hChR2(H134R)-EYFP] into the cingulate.

Center-surround modulation of visual cortical responses induced by Cg axon stimulation after blocking antidromic spiking of Cg neurons

They discovered that somatostatin-positive neurons strongly inhibit pyramidal neurons in response to cingulate input 200 μm away. That they also mediate suppression by visual stimuli outside of the receptive field suggests that both bottom-up visual processing and top-down attentional modulation use a common mechanism for surround suppression.

Citation and link: Long-range and local circuits for top-down modulation of visual cortex processing. Siyu Zhang, Min Xu, Tsukasa Kamigaki, Johnny Phong Hoang Do, Wei-Cheng Chang, Sean Jenvay, Kazunari Miyamichi, Liqun Luo, and Yang Dan Science 8 August 2014: 345 (6197), 660-665. [DOI:10.1126/science.1254126]

Saturday, August 9, 2014

Photon Hunting in the Twilight Zone

Deep in the twilight zone of the ocean, small, glowing sharks have evolved special eye features to maximize the amount of light they see, researchers report this week in PLOS ONE. The scientists mapped the eye shape, structure, and retina cells of five deep-sea bioluminescent sharks, predators that live 200 to 1000 meters deep in the ocean, where light hardly penetrates.

The sharks have developed many coping strategies. Their eyes possess a higher density of rods than those of nonbioluminescent sharks, which might enable them to see fast-changing light patterns. Such ability would be particularly useful when the animals emit light to communicate with one another. Some species also have a gap between the lens and the iris to allow extra light in the retina, a feature previously unknown in sharks.

In the eyes of lantern sharks (Etmopteridae), the scientists discovered a translucent area in the upper socket. The researchers suspect this feature might help the sharks adjust their glow to match the sunlight for camouflage.

I wonder if we computer nerds will evolve our visual system similarly.

Citation (Open Access):

Claes JM, Partridge JC, Hart NS, Garza-Gisholt E, Ho H-C, et al. (2014) Photon Hunting in the Twilight Zone: Visual Features of Mesopelagic Bioluminescent Sharks. PLoS ONE 9(8): e104213. doi:10.1371/journal.pone.0104213