Archimedes of Syracuse famously claimed, "Give me a place to stand on, and I will move the Earth." Understanding the principle of the lever is key to success in technology.
Wednesday, May 30, 2012
Tuesday, May 29, 2012
Isatis tinctoria is a flowering plant in the family Brassicaceae, from whose leaves a blue dye is produced (the flowers are yellow). It is commonly known as woad or Asp of Jerusalem; its common names in Italian, German, and French are guado or gualdo, Färberwaid, and pastel des teinturiers or guède, respectively. Together with weld (for yellow) and madder (for red), it was used to produce full-color images from at least the Neolithic Age, when it was introduced in Europe.
With the European discovery of the seaway to India, woad was replaced by indigo from Indigofera tinctoria, which contains the dye in higher concentration. Today, synthetic indigo is used. However, Isatis tinctoria still finds use in some inkjet inks because it is biodegradable.
Monday, May 28, 2012
From humans to insects, color and motion information is thought to be channeled through separate neural pathways for efficient visual processing, but it remains unclear whether and how these pathways interact to improve the perception of moving colored stimuli. Using sophisticated Drosophila genetics, intracellular electrophysiology, two-photon imaging, and behavioral experiments, Trevor Wardill et al. found that early in the processing stage, color photoreceptors feed into the motion pathway, and that this input improves the flies' optomotor performance in a flight simulator.
Read the paper "Multiple Spectral Inputs Improve Motion Discrimination in the Drosophila Visual System," Science 336(6083), 925–931, 18 May 2012. DOI: 10.1126/science.1215317.
Thursday, May 10, 2012
Words displayed in large fonts elicit stronger emotional responses, according to a new study. Mareike Bayer, Werner Sommer, and Annekathrin Schacht measured brain activity in 25 adults while showing them 72 emotionally positive, negative, and neutral words, including the words for gift, death, and chair. The team displayed the words in either 28-point or 125-point Arial. Volunteers showed stronger emotion-related brain activity 10 milliseconds earlier for the larger font size than for the smaller one, the authors reported online yesterday in PLoS ONE (open access). What's more, the emotional signals elicited by the larger font lasted a total of 180 milliseconds longer. The results parallel emotional responses to large and small versions of pictures with fearful, disgusting, or sex-related content. Pictures hold biological relevance for people, since a big photo of a predator probably signals proximity. That font size has a similar emotional effect probably reflects the importance language holds in our society, the authors speculate.
Wednesday, May 9, 2012
In the mid-80s, when we were building up the new digital color research area at PARC, we invited as many color scientists active in digital reproduction as we could. We had two motives: to scope out the field and to identify potential candidates to hire.
One visitor stood out for his deep knowledge both in color vision science and in the mathematical modeling of reproduced color. With his tall, slender figure he was immediately recognizable, and his soft voice was very determined in technical discussions. He also had a unique curriculum vitæ: he developed the avionics color displays for Honeywell and Boeing without ever changing location; his employer would change, but his lab remained intact. While others were still trying to characterize CRTs, he built a complete model for LCD displays; without him, we would not have high-fidelity color LCD displays today.
Dr. Louis D. Silverstein has died at age 61. Farewell Lou!
It has been almost a year since we posted on Maryam's color semiotics research.
Everyone is talking about it these days, but where can we find a rule or framework that defines it? What parameters are involved? Is such a framework useful for designers? Can it be communicated? Can its variation be modeled?
Last January 23rd, Maryam reached the 1,800th response to her survey. This is the last chance for your participation before her final analysis. Please take the survey now at https://www.keysurvey.co.uk/votingmodule/s180/survey/365495/1a02/
Monday, May 7, 2012
We know very little about the physiology of color vision. For example, in the case of dichromatic color vision, the trick in the past was to keep a rolodex of unattached dichromats while volunteering as an emergency physician; when a dichromat expired in the ER, there was an hour's window for wet color science.
Recently, a lot of progress has been made in color vision physiology by leveraging state-of-the-art equipment. A couple of months ago we reported on Kathy Mullen's breakthrough leveraging a new fMRI scanner in Australia. Today we cross the big pond and look at an application of big data technology.
Michele Fiscella harvests the retina of a mouse, places it on a MEA chip, sprinkles it with a nourishing fluid, and, for the couple of hours the retina remains functional, projects patterns on it and captures the interneural traffic.
MEA—for Micro Electrode Array—is a technology from the lab of Prof. Andreas Hierlemann. On a surface of 3.6 mm², 11,011 elliptical microelectrodes probe the retina. On average, there are 14 microelectrodes per neuron, and each electrode delivers 20,000 measurements per second. The hope is to tease out what information is passed bottom-up from the retina to the LGN.
To do that, the retina has to be placed upside down on the MEA, i.e., with the retinal ganglion cells on the microelectrodes and the photoreceptors facing the stimulus source. This is not necessarily contra naturam, as nothing comes top-down from the LGN, which is not part of the experiment.
Analytical engines, here come the big data…
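To get a feel for why this qualifies as big data, here is a rough back-of-the-envelope estimate. The electrode count and sampling rate are the figures quoted above; the 2 bytes per sample is our own assumption, since the ADC resolution is not stated here.

```python
# Back-of-the-envelope estimate of the MEA's raw data rate.
# Electrode count and sampling rate are from the figures above;
# the sample width (2 bytes) is an assumption, not a lab spec.

n_electrodes = 11_011        # microelectrodes on the 3.6 mm^2 array
samples_per_sec = 20_000     # measurements per electrode per second
bytes_per_sample = 2         # assumed ADC sample width

samples_total = n_electrodes * samples_per_sec
print(f"{samples_total:,} samples/s")  # 220,220,000 samples/s

# One hour of recording at the assumed sample width:
gb_per_hour = samples_total * bytes_per_sample * 3600 / 1e9
print(f"~{gb_per_hour:.0f} GB per hour")  # ~1586 GB per hour
```

Even under these conservative assumptions, a single retina preparation streams hundreds of millions of samples per second, on the order of a terabyte and a half per hour — clearly a job for analytical engines rather than spreadsheets.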