Stanford photo scientists are out to reinvent digital photography with the introduction of an open-source digital camera, which will give programmers around the world the chance to create software that will teach cameras new tricks.
If the technology catches on, camera performance will no longer be limited by the software that comes pre-installed by the manufacturer. Virtually all the features of the Stanford camera – focus, exposure, shutter speed, flash, etc. – are at the command of software that can be created by inspired programmers anywhere. “The premise of the project is to build a camera that is open source,” said computer science professor Marc Levoy.
Computer science graduate student Andrew Adams, who helped design the prototype of the Stanford camera (dubbed Frankencamera), imagines a future where consumers download applications to their open-platform cameras the way Apple apps are downloaded to iPhones today. When the camera’s operating software is made available publicly, perhaps a year from now, users will be able to continuously improve it, following the open-source model of the Linux operating system for computers or the Mozilla Firefox web browser.
From there, the sky’s the limit. Programmers will have the freedom to experiment with new ways of tuning the camera’s response to light and motion, adding their own algorithms to process the raw images in innovative ways.
Frankencamera at minimal cost
Levoy’s plan is to develop and manufacture the “Frankencamera” as a platform that will first be available at minimal cost to fellow computational photography researchers. In the young field of computational photography, which Levoy helped establish, researchers use optics benches, imaging chips, computers and software to develop techniques and algorithms to enhance and extend photography. This work, however, is bound to the lab. Frankencamera would give researchers the means to take their experiments into the studios, the landscapes, and the stadiums.
For example, among the most mature ideas in the field of computational photography is the idea of extending a camera’s “dynamic range,” or its ability to handle a wide range of lighting in a single frame. The process of high-dynamic-range imaging is to capture pictures of the same scene with different exposures and then to combine them into a composite image in which every pixel is optimally lit. Until now, this trick could be done only with images in computers. Levoy wants cameras to do this right at the scene, on demand. Although the algorithms are very well understood, no commercial cameras do this today. But Frankencamera does.
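The exposure-merging step can be sketched in a few lines. This is a minimal illustration, assuming the input frames are already aligned and stored as float arrays in [0, 1]; the weighting scheme here (trusting mid-tone pixels most) is one common textbook choice, not necessarily the specific algorithm the Stanford camera runs.

```python
import numpy as np

def merge_exposures(images, exposures):
    """Merge differently exposed frames of the same scene into one
    high-dynamic-range radiance estimate (a minimal sketch).

    images    -- list of aligned float arrays in [0, 1], same shape
    exposures -- list of exposure times in seconds, one per image
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposures):
        # Trust mid-tone pixels most; near-black and near-white pixels
        # carry little information, so they get low weight.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # per-frame estimate of scene radiance
        den += w
    # Weighted average; guard against pixels that were badly exposed
    # in every frame.
    return num / np.maximum(den, 1e-8)
```

Each pixel of the result is a radiance estimate dominated by whichever frame exposed that pixel best, which is what lets a single composite hold both deep shadows and bright highlights.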
Another algorithm that researchers have achieved in the lab, but no commercial camera allows, is enhancing the resolution of videos with high-resolution still photographs. While a camera is gathering low-resolution video at 30 frames a second, it could also periodically take a high-resolution still image. The extra information in the still could then be recombined by an algorithm into each video frame. Levoy and his students plan to implement that on Frankencamera, too.
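The recombination step can be illustrated with a simple detail-transfer sketch. The assumptions here are loud: the still is taken to be already aligned to the video frame, both are float arrays in [0, 1], and nearest-neighbour upsampling stands in for the motion-compensated interpolation a real pipeline would need.

```python
import numpy as np

def enhance_frame(video_frame, still, blur):
    """Inject high-frequency detail from a high-resolution still into
    an upsampled low-resolution video frame (a minimal sketch).

    blur -- a low-pass filter function, e.g. a Gaussian blur
    """
    # Upsample the video frame to the still's resolution by pixel
    # repetition; a real pipeline would use a better interpolator
    # and motion-compensated alignment.
    fy = still.shape[0] // video_frame.shape[0]
    fx = still.shape[1] // video_frame.shape[1]
    upsampled = np.repeat(np.repeat(video_frame, fy, axis=0), fx, axis=1)
    # The detail layer is what the still contains beyond its own
    # low-pass version; add it on top of the upsampled frame.
    detail = still - blur(still)
    return np.clip(upsampled + detail, 0.0, 1.0)
```

The upsampled frame supplies the motion and overall tone; the still supplies the fine texture that the 30-frames-a-second stream could not capture.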
Yet another idea is to have the camera communicate with computers on a network, such as a photo-hosting service on the Web. Imagine, Levoy says, if the camera could analyze highly rated pictures of a subject in an online gallery before snapping the shutter for another portrait of the same subject. The camera could then offer advice (or just automatically decide) on the settings that will best replicate the same skin tone or shading. By communicating with the network, the camera could avoid taking a ghastly picture.
Of course users with Frankencameras would not be constrained by what is already known. They’d be free to discover and experiment with all kinds of other operations that might yield innovative results because they’d have total control.
"Some cameras have software development kits that let you hook up a camera with a USB cable and tell it to set the exposure to this, the shutter speed to that, and take a picture, but that’s not what we’re talking about," says Levoy. "What we’re talking about is, tell it what to do on the next microsecond in a metering algorithm or an autofocusing algorithm, or fire the flash, focus a little differently and then fire the flash again — things you can’t program a commercial camera to do."
Behind the lens cap
To create an open-source camera, Levoy and the group cobbled together a number of different parts: the motherboard, per se, is a Texas Instruments "system on a chip" running Linux with image and general processors and a small LCD screen. The imaging chip is taken from a Nokia N95 cell phone, and the lenses are off-the-shelf Canon lenses, but they are combined with actuators to give the camera its fine-tuned software control. The body is custom made at Stanford. The project has benefited from the support of Nokia, Adobe Systems, Kodak, and Hewlett-Packard. HP recently gave graduate student David Jacobs a three-year fellowship to support his work on the project. Kodak, meanwhile, supports student Eddy Talvala.
Within about a year, after the camera is developed to his satisfaction, Levoy hopes to have the funding and the arrangements in place for an outside manufacturer to produce them in quantity, ideally for less than $1,000. Levoy would then provide them at cost to colleagues and their students at other universities.
The son, grandson, and great-grandson of opticians, Levoy sees his mission as not only advancing research in computational photography, but also imbuing new students with enthusiasm for technology. This spring he launched a course in digital photography in which he integrated the science of optics and algorithms and the history of photography’s social significance with lessons in photographic technique.
As many ideas as Levoy’s team may want to implement on the camera, the real goal is to enable the broader community of photography researchers and enthusiasts to contribute ideas the Stanford group has not imagined. The success of Camera 2.0 will be measured by how many new capabilities the community can add to the collective understanding of what’s possible in photography.
David Orenstein is the associate communications director for the School of Engineering.
David Orenstein | EurekAlert!