Posted on Feb 7, 2018
Edited on Feb 8, 2018, adding the mask cutout and depth map simulation.
In this article, I want to dig a bit deeper into the Google Pixel 2 portrait mode, showing samples I captured over the last months and summing up the technical article you can find on the Google Research page.
One of the truly unique features of the Google Pixel 2 is that the device can achieve a correct and convincing depth-of-field effect, comparable to a DSLR or mirrorless camera, using only one lens.
I suggest you take a look at the other related article about the Google Pixel 2 camera:
Google Pixel 2 Portrait mode: how it works
Knowing how the Google Pixel 2 portrait mode works will help you understand why the algorithm sometimes fails.
So basically, the Google Pixel 2 portrait mode works in four phases:
- It starts by taking a straight shot of the scene you want to photograph, using the HDR+ algorithm.
I already talked about the amazing Google HDR+ back in 2014 when I tested the Nexus 5, then again with the Google Nexus 6P, and now with the Google Pixel 2.
- In the second step, the processor cuts your subject out from the background. In this phase, the Google Pixel 2 portrait mode uses a neural network to decide which pixels belong to the subject and which to the background. It is able to understand this (or at least attempts to) because the network has been trained on many thousands of images.
After this step, we have just a “clipping mask” separating the subject from the background, but we don’t have any depth information about the scene.
- In this step, the Google Pixel 2 calculates the depth mask using a very smart idea: since all current-generation rear phone cameras have a dual-pixel sensor for fast phase-detection autofocus, the phone uses that system to compute the difference in viewpoint between the left and right halves of the lens. Google admits that the baseline between the two views is very small, about 1 mm, which is probably why they can’t rely on this system alone to create the final mask and have to combine it with the cutout done in step 2.
- At this point, the device combines the cutout mask from step 2 with the depth mask from step 3. Google isn’t giving much information about how they are combined (they say it’s a secret sauce!). The last part of the process is to blur the background using, as a driver, the mask created in the previous step.
To create a convincing blur, it replaces each original pixel with a translucent disk of the same color whose size varies with the depth read from the mask.
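The last step above can be sketched in a few lines of NumPy. This is only a toy simulation of the disk-splatting idea, not Google’s actual renderer; the scaling of disk size and translucency with depth is my assumption:

```python
import numpy as np

def depth_driven_blur(image, depth, max_radius=4):
    """Replace each pixel with a translucent disk of the same color;
    the disk radius grows with depth (0 = in focus, 1 = far background)."""
    h, w, c = image.shape
    accum = np.zeros((h, w, c))
    weight = np.zeros((h, w, 1))
    for y in range(h):
        for x in range(w):
            r = int(round(depth[y, x] * max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            inside = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
            # Bigger disks are more translucent, so in-focus pixels dominate.
            alpha = 1.0 / (1 + r * r)
            accum[yy[inside], xx[inside]] += alpha * image[y, x]
            weight[yy[inside], xx[inside], 0] += alpha
    return accum / np.maximum(weight, 1e-8)
```

With a depth mask of all zeros (everything in focus), the image passes through unchanged; as the mask values rise toward 1, the background dissolves into overlapping disks.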
Note: the Google Pixel 2 saves both photos to the phone’s memory, the sharp one and the blurred one. You can switch from one to the other in the Google Photos app (the default gallery app on the Google Pixel phones) and select the one you want to keep.
If you use Lightroom Mobile with the auto-add feature turned on, both pictures will automatically be imported into your grid view.
The four steps of the Google Pixel 2 portrait mode described above, with the cutout and depth map simulated in Photoshop.
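The left/right comparison in step 3 is essentially a tiny-baseline stereo matching problem. Here is a naive brute-force block-matching sketch of what that computation looks like (not Google’s actual algorithm, which is far more sophisticated):

```python
import numpy as np

def disparity_map(left, right, max_disp=4, patch=3):
    """For each pixel of the 'left' half-aperture view (grayscale 2D array),
    find the horizontal shift into the 'right' view that minimizes the
    sum of absolute differences over a small patch."""
    h, w = left.shape
    pad = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            ref = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            errs = [np.abs(ref - right[y - pad:y + pad + 1,
                                       x - d - pad:x - d + pad + 1]).sum()
                    for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(errs))
    return disp
```

Feeding it two views of the same scene shifted by a couple of pixels recovers that shift; on the phone, nearer objects shift more than farther ones, which is what turns disparity into a depth mask.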
The Google Pixel 2 Portrait mode on test
Some of my notes and thoughts are confirmed by the Google Research Blog: since the Google Pixel 2 uses a single-lens setup to achieve the portrait mode, and the base lens has a 27mm-equivalent focal length, the smartphone crops and then re-enlarges the photo to reach the final 12 MP resolution. The Google Research Blog says the device is actually zooming around 1.5x on the rear camera and 1.2x on the front one.
So remember that in portrait mode you are actually using a slight digital zoom. This is different behavior from smartphones that use a real longer lens to get the proper perspective, such as the iPhone X/8 and OnePlus 5.
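That crop-and-enlarge behavior can be illustrated with a minimal sketch. This uses nearest-neighbor resampling purely for simplicity; Google’s actual upscaler is certainly better than this:

```python
import numpy as np

def digital_zoom(image, factor=1.5):
    """Center-crop the image by `factor`, then enlarge the crop back to
    the original size (nearest-neighbor), as portrait mode does at ~1.5x."""
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = image[y0:y0 + ch, x0:x0 + cw]
    # Map each output row/column back to a source row/column in the crop.
    ys = np.arange(h) * ch // h
    xs = np.arange(w) * cw // w
    return crop[ys][:, xs]
```

The output has the same pixel dimensions as the input, but each output pixel is drawn from a smaller sensor area, which is the resolution cost of the digital zoom.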
Google Pixel 2 Portrait mode samples
Let’s look at the Google Pixel 2 portrait mode samples that I’ve captured over the last months.
Let me start by saying that my feelings are mixed.
The general quality of the final photo is very good: I love the colors and especially the detail on the skin. But I have to say that sometimes the HDR+ algorithm pushes the skin tones a bit too hard, producing slightly too-orange tones in some shots (though luckily we are far better off than the orange skin tones the iPhone X/8 produces in some harsh-light situations!).
Sometimes the algorithm fails at very easy tasks: in a few photos, though more than I was honestly expecting, I could find holes of sharp area in the background.
(Read further on to better understand the reason for this failure.)
Situations in which the Google Pixel 2 Portrait mode can fail
Reading further in the Google tech paper, we learn that the Google Pixel 2 portrait mode can fail in these situations:
- When the underlying base image recorded with the HDR+ algorithm has some blown-out areas. In this situation, it may fail to compute the cutout mask.
- When we photograph very unusual kinds of scenes. Google gives as an example “a girl kissing a crocodile”: a situation so unusual that the neural network may not have enough information to build the first cutout mask.
- When using a flat, textureless background, a repeating texture, or horizontal or vertical lines. There, the depth mask calculation can fail.
- When we are a bit too far from the subject. The reason is the tiny baseline between the left and right halves of the lens (as said above, about 1 mm). The greater the distance between the subject and the camera lens, the smaller the disparity available to compute the depth mask separating the subject from the background. The suggestion is to keep your subject between 30 cm and 2 meters away.
- The Google Pixel 2 portrait mode can fail when shooting a non-human subject. The second stage, the cutout mask, uses a neural network trained on people’s portraits.
For example, if you photograph a flower, the algorithm won’t find any person, so it will use only the depth mask.
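The distance limit in the list above follows directly from stereo geometry: disparity is proportional to baseline divided by distance, so with a ~1 mm baseline it shrinks fast as the subject moves away. A quick back-of-the-envelope check (the ~3000 px focal length is my assumption for a typical phone sensor, not a figure from Google):

```python
def disparity_px(baseline_mm, focal_px, distance_mm):
    """Approximate disparity in pixels of a point at distance_mm,
    relative to a background at infinity: d = f * B / Z."""
    return focal_px * baseline_mm / distance_mm

# Hypothetical ~3000 px focal length, 1 mm dual-pixel baseline:
print(disparity_px(1.0, 3000, 300))   # subject at 30 cm -> 10.0 px
print(disparity_px(1.0, 3000, 2000))  # subject at 2 m   -> 1.5 px
```

Roughly ten pixels of disparity at 30 cm drop to about one and a half pixels at 2 m, which is consistent with the suggested 30 cm to 2 m working range: beyond it, the signal is too weak to separate subject from background reliably.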
Google Pixel 2 Portrait using the front camera
The Google Pixel 2 is one of the first smartphones able to perform portrait mode on the front-facing camera as well.
However, to build the depth map it uses only the cutout mask (see the explanation above), because, at this point of the technology in 2018, the front camera doesn’t have the dual-pixel autofocus system.
This translates into a correct, working portrait mode that relies only on the neural network. It’s a less accurate process, since for an object closer or farther than the focus plane, the camera can’t blur it in proportion to its distance.
Final notes on the Google Pixel 2 portrait mode
It’s interesting to see how Google has implemented a very smart way to achieve a realistic portrait mode using only one lens.
Surely the company will now work on correcting the first faults pointed out above, and in the future we will get better and more accurate ways to compute the cutout and the depth mask.
Lastly, and this is a general topic for all smartphone camera makers, more work is needed to produce a convincingly realistic kind of blur. We are still quite far from the character and bokeh quality of legendary lenses from Leica or Zeiss.
I’m still fascinated, and confident that one day we will have the opportunity to choose a sort of “lens simulation” with the related bokeh quality.