Engineering graduate student narrows gap between high-resolution video and virtual reality

06.02.2009
With their immersive 3D capabilities, virtual-reality environments (VEs) provide the kind of intense visual experience that two-dimensional digital televisions could never live up to. But digital TVs outperform VEs in one important way: they can play high-resolution video in real time without a hitch, while VEs have trouble rendering the data-heavy video clips at a constant frame rate.

University of California at San Diego grad student Han Suk Kim is trying to narrow that performance gap so that VEs can one day be used for high-resolution video conferencing, video surveillance or even in virtual movie theaters. Kim, a computer science and engineering Ph.D. student at the Jacobs School of Engineering, has developed an efficient “mipmap” algorithm that "shrinks" high-resolution video content so that it can be played interactively in VEs. He has also created several optimization solutions for sustaining a stable video playback frame rate, even when the video is projected onto non-rectangular VE screens.

Kim will showcase his work during the student poster session at the Jacobs School of Engineering's Research Expo on Thursday, Feb. 19. Kim's research will be one of 240 grad student projects presented at Research Expo. For a sneak peek at some of Research Expo's hottest student posters, click here. Research Expo also includes technical breakout sessions by Jacobs School faculty, as well as a luncheon featuring keynote speaker Christopher Scolese, NASA's acting administrator.

According to Jürgen Schulze – Kim's advisor and a project scientist at the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2) – the algorithm that Kim has developed will make it possible to display super high-resolution 4K video in Calit2’s virtual auditorium.

"This ability does not only add more realism to architectural models, but it also allows us to use the super high resolution environments to their capability, for instance, in Calit2’s virtual reality StarCAVE, which can display 34 million pixels,” Schulze said. “Another feature of Kim’s algorithm is that dozens of video streams can be displayed concurrently, in different places of the virtual environment, with a much smaller impact on overall rendering performance than previously possible.”

Kim derived his algorithm from a technique called "mipmapping," which is widely used by computer graphics experts to design computer games, flight simulations and other 3D imaging systems. The term itself is indicative of its approach – the letters "MIP" are short for the Latin phrase multum in parvo, which means "much in a small space." By using mipmapping to reduce the level of detail (LOD) and downscale the size of high-resolution media datasets (like Calit2's 4K video of a tornado simulation, which totals almost 45 gigabytes), Kim was able to stream the video in real time at 25 frames per second. By adding various optimizations to allow for constant frame and rendering rates, Kim was able to rotate, zoom in on and otherwise manipulate the video playback screen, making the experience fully interactive.
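In practice, mipmapping means precomputing a pyramid of progressively smaller copies of each image so the renderer can pick whichever level matches the detail actually visible. The short C++ sketch below illustrates the idea for a single video frame; the frame size, single-channel layout and 2x2 box filter are assumptions made for this example, not details of Kim's implementation.

// Minimal sketch (not Kim's actual code): building a mipmap pyramid for one
// video frame by repeated 2x2 box-filter downsampling.
#include <cstdint>
#include <iostream>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels;          // single-channel, row-major
};

// Halve an image in each dimension by averaging 2x2 blocks.
Image downsample(const Image& src) {
    Image dst;
    dst.width  = src.width  / 2;
    dst.height = src.height / 2;
    dst.pixels.resize(static_cast<size_t>(dst.width) * dst.height);
    for (int y = 0; y < dst.height; ++y) {
        for (int x = 0; x < dst.width; ++x) {
            int sum = 0;
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                    sum += src.pixels[(2 * y + dy) * src.width + (2 * x + dx)];
            dst.pixels[y * dst.width + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return dst;
}

// Build coarser levels until the image can no longer be halved.
std::vector<Image> buildMipmaps(Image level0) {
    std::vector<Image> levels;
    levels.push_back(std::move(level0));
    while (levels.back().width > 1 && levels.back().height > 1)
        levels.push_back(downsample(levels.back()));
    return levels;
}

int main() {
    Image frame;
    frame.width  = 4096;                  // one 4K-wide frame (illustrative)
    frame.height = 2160;
    frame.pixels.assign(static_cast<size_t>(frame.width) * frame.height, 128);
    auto pyramid = buildMipmaps(frame);
    for (size_t i = 0; i < pyramid.size(); ++i)
        std::cout << "level " << i << ": "
                  << pyramid[i].width << " x " << pyramid[i].height << "\n";
}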

"In interactive computer graphic applications, each frame takes a different time to render, depending on many different factors, such as the amount of data to be loaded, the size of the rendered screen and cache effect in various places of a computer system," Kim explained. "One of the ideas behind my work is to apply the mipmap approach for multiple frames of video data. The algorithm I developed automatically calculates the correct LOD mipmap scaling depending on the distance away from the eye and the area of each tile shown on screen. We then use the processed data for playback. So I'm not really editing the video. I'm only touching it in terms of resolution.”

Although mipmap is an efficient method for coping with large datasets, there has to be a good playback system to support it, Kim said.

"Our approach reduces the memory required to display high-resolution images, depending on distance and visual perspective,” he said. “If the area is big and close to the viewer's face, the video is streamed at a high-resolution; if it’s small and far away from the viewer's face, it's streamed at a low-resolution.

“Another issue is how to read a lot of data efficiently,” Kim added. “If I want to render a particular frame in the VE, the data has to already be in the system and I have to read it from memory and copy it again to the graphics processor. For the VE, we used pre-fetching to read the data in advance, and to speed up the reading process we used synchronous input-output. If I want to play the video with audio, it has to be synchronized with the sounds, so it has to be played at the correct speed. This means I have to measure the speed of playing, and if it’s too slow, I have to jump a couple of frames."
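The playback ideas in the quote above can be illustrated with a minimal C++ sketch: a prefetch thread loads frames ahead of the renderer into a bounded queue, and the render loop follows a wall-clock playback position, dropping frames when it falls behind so playback stays in step with a nominal 25 frames per second. The threading layout, queue size and timing values are assumptions for illustration, not Kim's playback system.

// Minimal sketch: prefetching plus frame skipping driven by a playback clock.
#include <chrono>
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>

struct Frame { int index = 0; /* pixel data omitted in this sketch */ };

int main() {
    const int    totalFrames = 50;     // two seconds of video at 25 fps
    const double fps         = 25.0;
    const size_t queueCap    = 8;      // how far ahead the prefetcher may run

    std::deque<Frame> queue;
    std::mutex m;
    std::condition_variable cv;

    // Prefetch thread: "reads" frames ahead of the renderer (file I/O omitted).
    std::thread prefetcher([&] {
        for (int i = 0; i < totalFrames; ++i) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return queue.size() < queueCap; });
            queue.push_back(Frame{i});
            cv.notify_all();
        }
    });

    // Render loop: show the frame the playback clock asks for and drop any
    // frames the renderer was too slow to display.
    auto start = std::chrono::steady_clock::now();
    int lastShown = -1;
    while (true) {
        double elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        int target = static_cast<int>(elapsed * fps);
        if (target >= totalFrames) break;

        {
            std::lock_guard<std::mutex> lock(m);
            // Frame skipping: discard everything older than the target frame.
            while (!queue.empty() && queue.front().index < target)
                queue.pop_front();
            if (!queue.empty() && queue.front().index == target &&
                target != lastShown) {
                lastShown = target;
                std::cout << "render frame " << target << "\n"; // draw call here
            }
        }
        cv.notify_all();
        std::this_thread::sleep_for(std::chrono::milliseconds(2));
    }
    prefetcher.join();
}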

Kim's research has implications for another important field of research: biomedicine. UC San Diego's National Center for Microscopy and Imaging Research (NCMIR) has enlisted Kim to help visualize large datasets culled from brain cells – research that might ultimately play a part in the digital "Whole Brain Catalog" envisioned by UC San Diego's Mark Ellisman, a professor of neuroscience and bioengineering.

"Visualizing cellular datasets and maximizing high-resolution video playback share similar problems," Kim said. "If you have 10 gigs of data but only 2 gigs of memory, how do you achieve that? NCMIR has a lot of datasets that are usually made up of very high-resolution microscope images, and all the optimization solutions that we used in video playback can also be applied to the rendering system of light microscope and electron microscope data. The next step would be something called ‘transfer function design,’ which adds color to the 3D datasets so scientists can compare them.”

Kim will also present his preliminary research results as a poster at this year's Institute of Electrical and Electronics Engineers (IEEE) Virtual Reality conference in Lafayette, La., March 14-18.

Andrea Siedsma | EurekAlert!
Further information:
http://www.ucsd.edu


