Hitachi — Corporate Information, Research & Development

Hitachi has been developing high-picture-quality technology for TVs since the 1990s. With its first three-dimensional (3D)-compatible TV, launched in Japan in August 2011, Hitachi targeted even higher image quality along with improved safety and a greater sense of presence.

The technology that emerged from this effort is called "left and right image-balance adjustment technology." By eliminating the difference between the colors of the left- and right-eye images, it improves image quality and alleviates the eye strain experienced while watching TV, helping to provide comfortable viewing.

3D technology attracting attention again

3D compatibility of household TVs is progressing, isn't it?

Photo: FUKUDA Nobuhiro

FUKUDAThat's right. Since 2010, 3D TVs have been launched one after another by manufacturers. In August 2011, Hitachi started selling 3D plasma TVs in Japan.

While 3D technology itself has been around for a long time, it has lately begun to draw attention again. What re-focused attention on 3D was a well-known film director speaking about 3D at ShoWest 2005. At that time, cinema attendance was steadily declining, and 3D movies were expected to be a means of getting people to go to the movies again. The excitement generated by 3D in the film industry then spilled over into the realm of home TVs.

Tell us about the mechanism that makes it possible to view 3D images stereoscopically.

HASEGAWAA 3D image is composed of two images (namely, an image for the right eye and an image for the left eye) that have a relative "disparity" in the right and left directions. During playback of 3D images, when the right-eye image is shown to the right eye, and the left-eye image is shown to the left eye, the images are integrated in the brain as one. At that time, the disparity between the two images is recognized as depth, and the image appears to be three-dimensional. In other words, the image is shown in three dimensions by creating an optical illusion in the brain.

As shown in Figure 1, as for objects in the center of the image, when the left-eye image is shifted to the right side, and the right-eye image is shifted to the left side, the object appears to jump out in front of the TV screen. In contrast, when the left-eye image is shifted to the left side, and the right-eye image is shifted to the right side, the object appears to be behind the TV screen.
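The horizontal shifts described above can be sketched in a toy one-dimensional example. This is an illustration only; the function names and the single-row representation are assumptions for clarity, not part of any product code.

```python
def shift_row(row, dx, fill=0):
    """Shift a row of pixel values horizontally by dx (positive = right)."""
    n = len(row)
    out = [fill] * n
    for x in range(n):
        nx = x + dx
        if 0 <= nx < n:
            out[nx] = row[x]
    return out

def make_stereo_pair(row, disparity):
    """Positive disparity: the object appears in front of the screen
    (left-eye image shifted right, right-eye image shifted left).
    Negative disparity pushes the object behind the screen."""
    left = shift_row(row, disparity)
    right = shift_row(row, -disparity)
    return left, right

# A bright "object" (value 9) in the middle of a dark row
left, right = make_stereo_pair([0, 0, 9, 9, 0, 0], 1)
```

With disparity 1, the object lands one pixel to the right in the left-eye row and one pixel to the left in the right-eye row; the brain fuses this offset into depth in front of the screen.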

Figure 1: Mechanism for recognizing images in three dimensions

How are the different images shown to the left and right eyes?

Photo: HASEGAWA Minoru

HASEGAWAThe TVs we were involved in developing use a technology called the "active-shutter glasses" method.

In this method, the right-eye and left-eye images are shown alternately on the TV screen, each for 1/120th of a second. The viewer wears special glasses with shutters on the lenses that open and close in sync with the TV's display. By closing the right-eye shutter when the left-eye image is displayed and the left-eye shutter when the right-eye image is displayed, this mechanism shows only the left-eye image to the left eye and only the right-eye image to the right eye.
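The alternating display and shutter timing can be simulated as a simple event schedule. The data layout below is an illustrative assumption; it just makes the 1/120-second alternation and the shutter synchronization explicit.

```python
FRAME_PERIOD = 1 / 120  # each eye's image is shown for 1/120th of a second

def frame_schedule(n_frames):
    """Alternate left/right images; the opposite eye's shutter closes
    in sync with each displayed frame."""
    events = []
    for i in range(n_frames):
        eye = "L" if i % 2 == 0 else "R"
        events.append({
            "time": i * FRAME_PERIOD,       # start time of this frame
            "displayed": eye,               # which eye's image is on screen
            "open_shutter": eye,            # matching shutter stays open
            "closed_shutter": "R" if eye == "L" else "L",
        })
    return events

schedule = frame_schedule(4)  # L, R, L, R over 4/120 of a second
```

Each eye thus sees only its own image, at an effective 60 images per second per eye.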

Figure 2: Mechanism of active-shutter glasses method

Searching for peculiar problems associated with 3D images

What kind of peculiar challenges are associated with 3D images?

FUKUDAOne challenge is devising a method for producing a sense of presence. For 3D, the three-dimensionality generated by the disparity between the right- and left-eye images is the decisive factor in the sense of presence, and we felt the need to emphasize that three-dimensionality further. Accordingly, we proposed incorporating the perspective techniques often used in paintings into our existing image-processing technology.

When a person views a scene, distant objects appear blurred while things in front of the observer appear sharp. By incorporating this characteristic of human vision into image processing, it is possible to enhance the three-dimensionality and sense of presence of images. This technique, called "Pixel Manager EX", was developed mainly by members of Hitachi's Central Research Laboratory.
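The idea of blurring pixels according to their distance can be sketched with a one-dimensional box blur whose radius grows with depth. This is a minimal illustration of the principle, not the Pixel Manager EX algorithm itself; the names and the 1-D simplification are assumptions.

```python
def box_blur(row, radius):
    """Simple 1-D box blur; radius 0 leaves the row unchanged."""
    if radius == 0:
        return list(row)
    n = len(row)
    out = []
    for x in range(n):
        lo, hi = max(0, x - radius), min(n, x + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def depth_blur(row, depth_row, max_radius=2):
    """Blur each pixel more the farther away it is.
    depth_row holds per-pixel depth in [0, 1] (1 = far)."""
    return [
        box_blur(row, round(d * max_radius))[x]
        for x, d in enumerate(depth_row)
    ]

# Two bright points: one near (depth 0, stays sharp), one far (depth 1, blurred)
result = depth_blur([0, 10, 0, 0, 10, 0], [0, 0, 0, 1, 1, 1])
```

Near pixels keep their sharp values while far pixels are spread over their neighbors, mimicking how distant scenery looks softer to the eye.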

Photo: SAKANIWA Hidenori

SAKANIWA"Television" originally meant "viewing things from a distance". Consequently, while we are naturally aiming to show distant objects so that they appear distant, we are also aiming to show things so that they appear "more real than the real thing," in view of the way people see. For example, Japanese people imagine the color of cherry-blossom petals as "pale pink." In fact, the color is closer to a whitish pink, but when an image matches the colors people remember, it is perceived as prettier and more real than the actual thing.

How to show images is very important, isn't it?

FUKUDAYes, it is. It's true for 3D images as well as for 2D images. Hitachi has been continuously involved in developing high-picture-quality technologies since the 1990s. Two examples of such developments are "frame-rate conversion technology" and "clarity-enhancement technology". The results of these developments are reflected in successive TVs released by Hitachi.

SAKANIWAThe high-picture-quality technology incorporated in the 3D-compatible TVs launched in Japan since August 2011 is built on the accumulated results of our research efforts. Before development of the 3D TV started, we had been studying what sorts of technologies would be needed and how to bring out better three-dimensionality.

Photo: HASEGAWA Minoru

HASEGAWAIn fact, we also examined 3D images ourselves and dug up problems. At that time there was little 3D content available, so our investigations were painstaking: we shot our own 3D images using 3D-compatible digital cameras and web cameras.

What we found during these investigations was another issue, namely, safety concerns while viewing 3D images. Each time 3D has grabbed attention in the past, eye strain accompanying the viewing of 3D images has caused a great deal of trouble. Although eye strain has many causes, we focused on the difference between the colors of the images for the left and right eyes. While investigating 3D images, we became quite concerned about differences in color tint, and this concern motivated us to develop a high-picture-quality technology named "left and right image-balance adjustment technology."

Eliminating the color difference between left and right images

Why is the difference between the colors of the left- and right-eye images connected with eye strain?

HASEGAWAThe left- and right-eye images used in 3D imaging are captured with separate cameras. Differences inherent in the optical systems of the cameras therefore make it difficult to capture images with exactly the same color tint. As a result, a difference between the colors of the left- and right-eye images is generated.

If this color difference is present, the brain becomes confused about whether the two images really are the same image; in other words, it becomes difficult to integrate them in the brain. This confused state is referred to as "binocular rivalry," and when it continues for a long period it leads to health issues such as eye strain.

Figure 3: Binocular rivalry

SAKANIWAAttention was drawn to the health issues brought on by viewing 3D images quite some time ago, and Hitachi has been actively involved in efforts to establish safety. For example, we helped draw up ISO standards (the ISO 9241-300 series) covering the safety of 3D images, as well as safety guidelines set out by a body called the 3D Consortium. The guidelines lay out principles concerning children aged six and under viewing 3D images, the reasoning being that 3D images may hinder the development of eyesight in that age bracket.

What sort of things do you do to eliminate the color difference?

FUKUDAWith left and right image-balance adjustment technology, the color of the right-eye image is corrected to match the left-eye image. The amount of each color used in each image is analyzed, and the analysis results are converted into numerical values. In that manner, the difference between the left- and right-eye images can be understood in numerical terms. Then, by adjusting the images so that their sets of color information conform, the difference between the colors of the images can be eliminated.
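The analyze-then-conform step can be sketched as a per-channel mean match: measure each channel's average in both images, then offset the right-eye image so its averages agree with the left. This is a deliberately simplified illustration under that assumption; the product's analysis and correction are more elaborate.

```python
def channel_means(img):
    """Per-channel mean of a list of (r, g, b) pixel tuples."""
    n = len(img)
    return tuple(sum(p[c] for p in img) / n for c in range(3))

def match_right_to_left(left, right):
    """Offset each channel of the right image so that its mean
    matches the left image's mean, clamped to the 8-bit range."""
    ml, mr = channel_means(left), channel_means(right)
    offset = tuple(ml[c] - mr[c] for c in range(3))
    return [
        tuple(min(255, max(0, p[c] + offset[c])) for c in range(3))
        for p in right
    ]

# Right-eye image is 10 units short of green relative to the left
left_img = [(100, 100, 100), (120, 100, 100)]
right_img = [(100, 90, 100), (120, 90, 100)]
corrected = match_right_to_left(left_img, right_img)
```

After correction, the two images have identical per-channel averages, so their overall color tints conform.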

Figure 4: Mechanism of left and right image-balance adjustment technology

SAKANIWAHowever, it is not good to harmonize the color information completely. Because of the disparity, the areas covered by the left- and right-eye images on the screen differ slightly; if a green ball is shown at the edge of the left-eye image, it may not appear in the right-eye image at all. In that case, adjusting the right-eye image to match the amount of green in the left-eye image would change its color tint.

FUKUDAThat said, we devised a scheme that determines the amount of correction while taking into account the parts that cause such color-tint errors. Determining the correction amount was quite a struggle, taking a month or so. There were many things we couldn't understand by looking at the numbers alone, so we performed evaluations by looking at actual images and adjusted the correction amount several times on the basis of the results.
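One simple way to keep edge-only content (like the green ball) from skewing the correction is to compute the color statistics only over a central region. The margin-based exclusion below is an illustrative assumption, not Hitachi's actual scheme.

```python
def central_means(img, width, margin):
    """Per-channel means over pixels whose x position is at least
    `margin` from either edge of a row-major image of the given width.
    Edge columns may exist in only one eye's image, so excluding them
    keeps one-eye-only content out of the correction statistics."""
    pixels = [
        p for i, p in enumerate(img)
        if margin <= i % width < width - margin
    ]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

# 1x4 image: a green object at the left edge, black at the right edge.
# With margin=1, only the two neutral central pixels are measured.
img = [(0, 255, 0), (100, 100, 100), (100, 100, 100), (0, 0, 0)]
means = central_means(img, width=4, margin=1)
```

Using these central means in the offset calculation leaves the edge-only green out of the correction, so matching the eyes no longer shifts the overall tint.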

Method for processing images in real time

This correction of color tint can be done by a TV in an instant, right?

Photo: SAKANIWA Hidenori

SAKANIWAYes. It is designed to allow processing in real time. However, since 3D images consist of left- and right-eye images, the data volume that must be processed is double that for 2D images, so the correction process is no simple matter.

What's more, to make the color-tint correction easier, the "color space" of the images is converted from red, green, and blue (RGB) to one containing hue, which further increases the data volume. Although each component of the RGB color space has 256 gradations, the HSV color space needs several times that number of gradations per component to preserve the precision of the RGB data. As the number of gradations increases, so does the amount of data. A method that can process a large amount of data at one time is therefore needed.
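The conversion to a hue-based space can be illustrated with Python's standard `colorsys` module. The 1024-level quantization below is an illustrative choice to show why HSV components need more gradations than the 256 levels of 8-bit RGB; it is not the gradation count used in the product.

```python
import colorsys

def rgb8_to_hsv(r, g, b, levels=1024):
    """Convert an 8-bit RGB pixel to HSV, quantizing each component
    to `levels` gradations. Using more than 256 levels per component
    helps keep the round trip back to 8-bit RGB lossless."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return tuple(round(x * (levels - 1)) for x in (h, s, v))

# Pure red: hue 0, full saturation and value
hsv = rgb8_to_hsv(255, 0, 0)
```

With four times as many gradations per component, each pixel simply carries more bits, which is why the FPGA must move correspondingly more data per frame.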

FUKUDAWhen processing a large amount of data at one time, processing speed becomes vital. The processing is therefore done by hardware called a field-programmable gate array, or FPGA for short.

An FPGA is, however, an expensive piece of hardware, and as circuit scale increases, so do costs. Since the cost of components is reflected in the price of a TV, circuit scale should be reduced in order to keep costs down. It would be a shame to produce a TV that has long-awaited picture quality but is too expensive for anyone to afford. Accordingly, we tried to process data volumes more than double the conventional ones in real time while keeping the circuit scale as small as possible. This was the most difficult challenge to meet in commercializing the TVs.

How did you meet this challenge?

SAKANIWAWe split the data processing so that the parts that must be processed quickly are done by hardware and the other parts are done by software. The processing has to be split so that the amount of data exchanged between hardware and software stays small.

Various kinds of processing, such as showing menu screens and adjusting the sound volume, are already done by software. Given that, image-analysis processing, which involves little communication traffic with the hardware, is also performed by software.

Figure 5: Flow of image-correction processing by left and right image-balance adjustment technology

Executing part of the processing in software makes it possible to reduce the circuit scale by one third to one half compared with the case in which all processing is done by hardware. What's more, processing executed in software makes later fine-tuning much easier. Although the circuits of an FPGA can also be rewritten, changing them at a later date is not easy.

Future image processing

How will image-processing technologies evolve from now onwards?

Photo: FUKUDA Nobuhiro

FUKUDAThe left and right image-balance adjustment technology that we developed corrects color tint when images are displayed. By applying this technology to a 3D camera, it becomes possible to correct color tint at the same time as images are shot. Moreover, since this kind of technology can match the color tints of multiple images, it is also applicable to the production of composite images.

In the future, higher resolution images (such as "4K2K") and images from multiple viewpoints (for autostereoscopic 3D television) will appear. In line with that trend, the amount of data that will have to be processed in real time is set to rapidly increase. I feel it will thus be necessary to advance research on improving not only image quality but also efficiency of data processing.

Research goals are broadening in accord with world trends, right?

HASEGAWAYes, they are. Until now, the TV has been the main imaging device, but from now on, tablet terminals will become the main ones. Amid those trends, different challenges are unearthed, and we will face them one by one. When the image output device changes, the technical challenges change too. Accordingly, I think we must be sure to follow world trends closely.

SAKANIWASince the amount of information contained in images is huge, I think how to extract the required information and how to show it clearly are also important points. Since the brain determines what an image looks like and what sensation it evokes, future research on image processing will probably move toward elucidating the recognition mechanisms of the brain.

FUKUDAThe image-processing technologies that we have accumulated are being applied not only to TVs but also in fields related to automobiles, IT, and so on. From now on, by getting involved in these fields too, I want to create technologies that have been unavailable up till now. In doing so, in line with the mission that my fellow researchers and I share, we can continue to contribute to humanity and society through technology.

(Publication: April 24, 2012)

Professional affiliations and official positions are as at the time of publication.