
How AI Changes Photography



If you're wondering how good the camera on your next phone will be, it would be wise to pay attention to what the manufacturer has to say about AI. Beyond the hype and the noise, the technology has enabled stunning advances in photography over the past few years, and there is no reason to think progress will slow down.

There are still plenty of gimmicks around, to be sure. But the most impressive recent accomplishments in photography have happened at the level of software and silicon rather than the sensor or the lens – and this is largely because AI gives cameras a better understanding of what they are looking at.

Google Photos provided a clear demonstration of just how powerful a combination AI and photography can be when the app launched in 2015. Before then, the search giant had been using machine learning to categorize images in Google+ for years, but the launch of its Photos app included consumer-facing AI features that would have been unimaginable to most. Users' disorganized libraries of thousands of untagged photos were transformed overnight into searchable databases.

Suddenly, or so it seemed, Google knew what your cat looked like.


Photo by James Bareham / The Verge

Google built on the work of a previous acquisition, 2013's DNNresearch, by setting up a deep neural network trained on data that had been labeled by humans. This is called supervised learning; the process involves training the network on millions of images so that it can look for visual clues at the pixel level that help identify the category. Over time, the algorithm gets better and better at recognizing, say, a panda, because it contains the patterns used to correctly identify pandas in the past. It learns where the black fur and the white fur tend to be in relation to one another, and how that differs from the hide of a Holstein cow, for example. With further training, it becomes possible to search for more abstract terms such as "animal" or "breakfast," which may not have common visual indicators but are still immediately obvious to humans.
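To make the idea concrete, here is a minimal, purely illustrative sketch of supervised learning in PyTorch. The images, labels, and tiny network below are toy stand-ins, not Google's pipeline; production systems train far deeper models on millions of labeled photos.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a labeled photo dataset: 64x64 RGB images and 3 classes
# (say, "panda", "cow", "cat"). Real pipelines use millions of real photos.
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 3, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small convolutional network; production classifiers are far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 3),  # one score per category
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()   # gradients say how to adjust each weight
        optimizer.step()  # so labeled examples are classified better
```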

It takes a great deal of time and computing power to train an algorithm like this, but once the data centers have done their job, it can run on low-powered mobile devices without much trouble. The heavy lifting has already been done, so once your photos are uploaded to the cloud, Google can use its model to analyze and label the whole library. About a year after Google Photos was released, Apple announced a photo search feature that was similarly trained on a neural network, but as part of the company's commitment to privacy, the actual categorization is performed on each device's processor separately, without sending the data off it. This usually takes a day or two and happens in the background after setup.
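The split between expensive training and cheap inference is easy to see in code. Continuing the toy model from the sketch above, classifying a new photo is just a single forward pass with gradients disabled, which is why it can run comfortably on a phone:

```python
import torch

model.eval()           # reuse the toy classifier trained above
with torch.no_grad():  # no gradients: inference only
    new_photo = torch.randn(1, 3, 64, 64)          # stand-in library image
    scores = model(new_photo)
    predicted_class = scores.argmax(dim=1).item()  # index of the best label
print(predicted_class)
```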

Smart photo management software is one thing, but AI and machine learning are arguably having a greater impact on how images are captured in the first place. Yes, lenses continue to get a little faster and sensors can always get a little bigger, but we are already pushing at the limits of physics when it comes to cramming optical systems into slim mobile devices. Still, it is not uncommon these days for phones to take better shots in some situations than a lot of dedicated camera gear, at least before post-processing. That is because traditional cameras cannot compete on another category of hardware that is just as profound for photography: systems-on-chip that contain a CPU, an image signal processor, and, increasingly, a neural processing unit (NPU).


This is the hardware that gets leveraged in what is known as computational photography, a broad term that covers everything from the fake depth-of-field effects in phones' portrait modes to the algorithms that help drive the Google Pixel's incredible image quality. Not all computational photography involves AI, but AI is certainly a major component of it.

Apple, for example, uses this technology to drive its dual-camera portrait mode. The iPhone's image signal processor uses machine learning techniques to recognize people with one camera, while the second camera creates a depth map to help isolate the subject and blur the background. The ability to recognize people through machine learning was not new when this feature debuted in 2016; it is what photo organization software was already doing. But to manage it in real time, at the speed required for a smartphone camera, was a breakthrough.
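At its core, the effect combines a subject mask with a depth map. The sketch below is a rough illustration of that idea, not Apple's pipeline: the image and depth map are synthetic, and the subject threshold is a made-up value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 480, 640
image = np.random.rand(h, w, 3)                    # stand-in camera frame
depth = np.tile(np.linspace(0.0, 1.0, w), (h, 1))  # fake depth map, 0 = near

# Blur each color channel to simulate an out-of-focus background.
blurred = np.stack(
    [gaussian_filter(image[..., c], sigma=8) for c in range(3)], axis=-1
)

subject_mask = (depth < 0.4)[..., None]  # hypothetical "near subject" cutoff
portrait = np.where(subject_mask, image, blurred)  # sharp subject, soft rest
```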

Google remains the clear leader in this field, however, with the superb results produced by all three generations of Pixel as the most compelling evidence. HDR+, the default capture mode, uses a complex algorithm that merges several underexposed frames into one, and, as Google's computational photography lead Marc Levoy has noted, machine learning means the system only gets better with time. Google has trained its AI on a huge dataset of labeled photos, as with the Google Photos software, and this further aids the camera with exposure. The Pixel 2, in particular, produced such an impressive level of baseline image quality that some of us at The Verge have been more than comfortable using it for professional work on this site.
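The core idea behind burst merging is statistical: averaging several short, underexposed frames shrinks random sensor noise roughly with the square root of the frame count, after which the result can be brightened. This NumPy sketch shows only that principle on synthetic data; the real HDR+ pipeline also aligns frames and merges per tile.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((480, 640), 0.1)  # dim "true" scene, values in [0, 1]

# Eight short exposures, each corrupted by random sensor noise.
frames = [scene + rng.normal(0.0, 0.02, scene.shape) for _ in range(8)]

merged = np.mean(frames, axis=0)          # noise drops roughly as sqrt(8)
result = np.clip(merged * 4.0, 0.0, 1.0)  # digital gain after merging
```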

But Google's lead has never looked as stark as it did a few months ago with the release of Night Sight. The new Pixel feature stitches long exposures together and uses a machine learning algorithm to calculate more accurate white balance and colors, with frankly astonishing results. The feature works best on the Pixel 3, since the algorithms were designed with the latest hardware in mind, but Google has made it available to all Pixel phones – even the original, which lacks optical image stabilization – and it is a stunning advertisement for how software is now more important than camera hardware when it comes to mobile photography.
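Google's learned white balancer is proprietary, but the classical baseline it improves on gives a feel for the problem: the "gray world" heuristic assumes the scene averages out to neutral gray and rescales each channel accordingly, which tends to fail in exactly the dim, oddly lit scenes Night Sight targets. A minimal sketch on a synthetic frame:

```python
import numpy as np

image = np.random.rand(480, 640, 3)  # stand-in for a merged night frame

# Gray-world assumption: the scene's average should be neutral gray,
# so scale each channel until the three channel means match.
channel_means = image.reshape(-1, 3).mean(axis=0)
gains = channel_means.mean() / channel_means
balanced = np.clip(image * gains, 0.0, 1.0)
```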


However, there is still room for hardware to make a difference, especially when it is backed by AI. Honor's new View 20, along with parent company Huawei's Nova 4, are the first phones to use the Sony IMX586 image sensor. It is a bigger sensor than most competitors', and at 48 megapixels it represents the highest resolution yet seen on any phone. But that still means cramming a lot of tiny pixels into a tiny space, which is usually problematic for image quality. In View 20 tests, however, Honor's AI Ultra Clarity mode excels at making the most of the resolution, descrambling the sensor's unusual color filter to unlock extra detail. This results in huge photos that you can zoom into for days.
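Sensors like the IMX586 use a quad-Bayer color filter, where each 2x2 block of photosites shares one color. In low light these are typically binned: the four same-color values are summed into one brighter, cleaner pixel, trading the 48-megapixel grid for a 12-megapixel image. Here is a scaled-down NumPy sketch of that binning step; the array is a stand-in, not real sensor data.

```python
import numpy as np

raw = np.random.rand(800, 600)  # scaled-down stand-in for a 48 MP raw mosaic

# 2x2 binning: sum each block of four same-color photosites into one pixel,
# quartering the resolution but boosting the per-pixel signal.
binned = raw.reshape(400, 2, 300, 2).sum(axis=(1, 3))
```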

Image signal processors have been important to phone camera performance for some time, but it looks likely that NPUs will take on a larger role as computational photography advances. Huawei was the first company to announce a system-on-chip with dedicated AI hardware, the Kirin 970, although Apple's A11 Bionic ended up reaching consumers first. Qualcomm, the biggest supplier of Android processors in the world, has not made machine learning a major focus yet, but Google has developed its own chip, called the Pixel Visual Core, to help with AI-related imaging tasks. The latest Apple A12 Bionic, meanwhile, has an eight-core neural engine that can run tasks in Core ML, Apple's machine learning framework, up to nine times faster than the A11, and for the first time it is directly linked to the image processor. Apple says this gives the camera a better understanding of the focal plane, for example, helping it generate more realistic depth of field.

This kind of hardware will become increasingly important to efficient and performant on-device machine learning, which has an extraordinarily high ceiling in terms of processor demands. Remember, the algorithms that power Google Photos were trained on huge, powerful computers with beefy GPUs and tensor cores before being set loose on your library. Much of that work can be done "in advance," so to speak, but the ability to carry out machine learning calculations on a mobile device in real time remains cutting-edge.

Google has shown some impressive work that could reduce the processing burden, while neural engines are getting faster by the year. But even at this early stage of computational photography, there are real benefits to be found from phone cameras that have been designed around machine learning. In fact, out of all the possibilities and applications raised by the AI hype wave of the past few years, the area with the most practical use today is arguably photography. The camera is an essential feature of any phone, and AI is our best shot at improving it.

