Samsung ISOCELL Dual camera solution

poohbiwon USA Posts: 714 ✭✭✭✭✭

Maybe the Axon 9 could make use of this (along with OIS) to give it the camera the Axon 7 never had.

https://goo.gl/kaHtRA
Known as ISOCELL Dual, the solution combines hardware and software components optimized for low-light and bokeh-effect shots.

Comments

  • coldheat06 Fort Worth, Texas Posts: 1,525 ✭✭✭✭✭✭✭✭

    @poohbiwon that looks pretty interesting!! Hopefully ZTE will use it or create something in-house that will be just as good.

  • jimlloyd40 Phoenix, AZ Posts: 15,845 ✭✭✭✭✭✭✭✭

    If ZTE would just incorporate the Camera2 API into their phones, we could use the Google camera app and it wouldn't cost ZTE anything.

  • razor512 New York Posts: 2,223 ✭✭✭✭✭✭✭✭

    When it comes to photo quality, assuming you are using the standard mass-produced glass lenses (used in just about every phone on the market), the sensor is responsible for around 90% of the image quality you see.

    No matter how good the software is, any image data that falls below the SNR of the sensor will be lost, and it is impossible for any type of post-processing to recover those details. This is why, even with the best smartphone cameras, you often lose fine details even at low ISOs. If you compare a 12 megapixel smartphone photo to a 12 megapixel DSLR, e.g., the Nikon D700, you will see that the DSLR has far more detail, even though both have the same output resolution.

    In the case of the Axon 7, even with the Camera2 API and the Google camera app, you will not get the same quality as a device like the Google Pixel, since the Pixel is using a larger image sensor.

    If you want quality that can match something like the Pixel 2, then you need to go with a sensor such as the Sony IMX378.

    Currently the Axon 7 uses a Samsung ISOCELL camera module. While there is no 1:1 comparison within the Axon 7 product line, for the Galaxy S7 there were models with the Sony sensor and others with the Samsung sensor. https://www.androidauthority.com/galaxy-s7-isocell-vs-sony-sensor-camera-680266/

    While the ISOCELL sensors are good with dynamic range, the Sony sensors of a similar generation are able to consistently provide better detail.
    While in the article they chose the ISOCELL as the winner, I personally would go with the Sony sensor, as many of the other aspects, such as white balance, colors, and HDR tone mapping, can be improved through software, but detail cannot. Furthermore, if you have a sensor with good detail and camera RAW, then even if they completely mess up the camera software, you can still get the most out of the camera by having Lightroom, Photoshop, or any other image editor of your choice process the RAW file.

    When it comes to image processing, the SNR of the sensor and its noise characteristics are most important. If you look at the DSLR market, you will see that a massive number of DSLRs from a wide range of brands all use the same exact 24 megapixel Sony APS-C sensor, and that is because it has good noise performance and, most importantly, almost no color shift in the shadows. That makes it easier for the built-in image processing to generate a final image for video and JPEGs (for those who are unfortunate enough to still be shooting JPEG on their DSLR), and it also makes the raw files easier to process, as you no longer have to do split toning, or isolate the shadows in a separate layer and do separate color correction, when doing heavy shadow recovery.

  • hollap Wisconsin USA Posts: 7,546 ✭✭✭✭✭✭✭✭

    I'd just like to see a great camera in ANY ZTE device. Some are good, but none are great.

  • marcwool1 Canada Posts: 59 ✭✭✭✭✭

    razor512, you're kind of missing the point. With the Camera2 API and the Google camera app, we will get better low-light photos than we do without it. Whether we get photos equal to a Pixel phone is irrelevant.

  • musicdjm Worcester, MA Posts: 3,012 mod

    @marcwool1 said:
    razor512, you're kind of missing the point. With the Camera2 API and the Google camera app, we will get better low-light photos than we do without it. Whether we get photos equal to a Pixel phone is irrelevant.

    Camera2 API isn't magic; it's not installing a new lens or anything on the camera. Camera2 API can make photos look better, yes, but ZTE can implement their own API and still get the best low-light performance available for their sensor without it; they just need to spend more time and money making sure the camera is one of the top priorities, second only to the audio.

  • jimlloyd40 Phoenix, AZ Posts: 15,845 ✭✭✭✭✭✭✭✭

    @musicdjm said:

    @marcwool1 said:
    razor512, you're kind of missing the point. With the Camera2 API and the Google camera app, we will get better low-light photos than we do without it. Whether we get photos equal to a Pixel phone is irrelevant.

    Camera2 API isn't magic; it's not installing a new lens or anything on the camera. Camera2 API can make photos look better, yes, but ZTE can implement their own API and still get the best low-light performance available for their sensor without it; they just need to spend more time and money making sure the camera is one of the top priorities, second only to the audio.

    Camera2 API in and of itself isn't magic. Being able to use the Google camera app with HDR+ is where the magic comes in. And yes, ZTE could program the picture processing to improve the low-light quality, but they didn't, and the Camera2 API would have been an instant improvement without ZTE doing anything else. When I first got the Razer phone the pictures were pretty bad. But since Razer had programmed the Camera2 API into the software, I installed the Google camera app and instantly had dramatically improved photos. Razer has since improved the processing with updates, but I didn't have to wait for that improvement.

  • kevinmcmurtrie Silicon Valley area Posts: 294 ✭✭✭✭✭✭

    @razor512 said:

    . . .

    No matter how good the software is, any image data that falls below the SNR of the sensor will be lost, and it is impossible for any type of post-processing to recover those details. This is why, even with the best smartphone cameras, you often lose fine details even at low ISOs. If you compare a 12 megapixel smartphone photo to a 12 megapixel DSLR, e.g., the Nikon D700, you will see that the DSLR has far more detail, even though both have the same output resolution.

    . . .

    There are software tricks. My old OPPO had super-resolution and my old Sony had extended dynamic range. I've seen artifacts hinting that iOS does it too. Essentially, the camera records a momentary burst of video at full resolution. Software then blends together portions of the frames that can be aligned using motion compensation. The end result is that a crappy camera can sometimes produce wide dynamic range, sharp images, and no optical motion blur. It's the same trick that's used to produce extremely sharp photos of the night sky with DSLR cameras (the sky moves faster than you'd think).

    Downsides of the software trick: it drains the battery like crazy, it can't tolerate much motion before artifacts appear, and only a handful of software engineers know how to do this efficiently.
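
    To make the "align then blend" part concrete, here is a very rough Java sketch of the simplest possible alignment step. It is only an illustration, not what OPPO/Sony/Apple actually ship: the class and method names (FrameAlign, estimateShift, maxShift) are made up, it assumes two same-sized frames, and it only estimates a single global integer-pixel shift by brute-force minimizing a sum of absolute differences, whereas real pipelines align per tile, at sub-pixel precision, and in dedicated hardware.

    import java.awt.image.BufferedImage;

    public class FrameAlign {
        // Estimate the global (dx, dy) shift that best maps 'frame' onto 'reference'
        // by brute-force search over a small window, comparing luma with a sum of
        // absolute differences (SAD). Purely illustrative; assumes same-sized frames.
        public static int[] estimateShift(BufferedImage reference, BufferedImage frame, int maxShift) {
            int w = reference.getWidth(), h = reference.getHeight();
            long bestCost = Long.MAX_VALUE;
            int bestDx = 0, bestDy = 0;
            for (int dy = -maxShift; dy <= maxShift; dy++) {
                for (int dx = -maxShift; dx <= maxShift; dx++) {
                    long cost = 0;
                    // Compare only the region where both frames overlap for this shift,
                    // sampling every 4th pixel to keep the search cheap.
                    for (int y = Math.max(0, -dy); y < Math.min(h, h - dy); y += 4) {
                        for (int x = Math.max(0, -dx); x < Math.min(w, w - dx); x += 4) {
                            cost += Math.abs(luma(reference.getRGB(x, y)) - luma(frame.getRGB(x + dx, y + dy)));
                        }
                    }
                    if (cost < bestCost) { bestCost = cost; bestDx = dx; bestDy = dy; }
                }
            }
            return new int[] { bestDx, bestDy };
        }

        // Integer approximation of Rec. 601 luma from a packed RGB pixel.
        private static int luma(int rgb) {
            int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
            return (r * 299 + g * 587 + b * 114) / 1000;
        }
    }

    Once every frame has a shift estimated against the reference, the blend itself is just a per-pixel average of the shifted frames.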

  • razor512 New York Posts: 2,223 ✭✭✭✭✭✭✭✭
    edited February 12, 2018 2:55AM

    What you described is basically mean stacking; it is something that can be done in post if the subject is not moving. It allows you to remove noise without really sacrificing any detail. The issue is that it does not allow you to bring back any details that were lost due to a high ISO pushing those details below the SNR. Mean stacking, where possible, will always give better results than the built-in noise reduction that a camera app will do. Any details that are right on the edge of being lost in the noise will be recovered, but anything below the SNR is simply not captured, so no amount of post-processing can bring out data that was never captured.

    If needed, I did a tutorial on mean stacking with smartphone images (if you have a good amount of storage and there is no movement, do a burst shot just in case :).
    https://community.zteusa.com/discussion/2173/android-tips-capture-clean-low-light-photos-hand-held-and-without-long-exposure#latest

    Mean stacking can handle camera movement but no movement in the scene.
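
    In case it helps to see how little the stacking step itself involves, here is a minimal Java sketch. It assumes a handful of already-aligned, same-sized burst frames on disk (the file names are placeholders) and simply averages every pixel across the frames; random noise averages out while real detail stays, but as noted above, nothing below the sensor's noise floor comes back.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class MeanStack {
        public static void main(String[] args) throws Exception {
            // Placeholder file names: an already-aligned burst (tripod or burst shot).
            String[] frames = { "burst_01.jpg", "burst_02.jpg", "burst_03.jpg", "burst_04.jpg" };

            BufferedImage first = ImageIO.read(new File(frames[0]));
            int w = first.getWidth(), h = first.getHeight();
            long[] sumR = new long[w * h], sumG = new long[w * h], sumB = new long[w * h];

            // Accumulate each color channel across all frames.
            for (String name : frames) {
                BufferedImage img = ImageIO.read(new File(name));
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int rgb = img.getRGB(x, y), i = y * w + x;
                        sumR[i] += (rgb >> 16) & 0xFF;
                        sumG[i] += (rgb >> 8) & 0xFF;
                        sumB[i] += rgb & 0xFF;
                    }
                }
            }

            // Divide by the frame count to get the mean-stacked result.
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            int n = frames.length;
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int i = y * w + x;
                    int r = (int) (sumR[i] / n), g = (int) (sumG[i] / n), b = (int) (sumB[i] / n);
                    out.setRGB(x, y, (r << 16) | (g << 8) | b);
                }
            }
            ImageIO.write(out, "png", new File("stacked.png"));
        }
    }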

  • westside80 Posts: 6,147 ✭✭✭✭✭✭✭✭

    I wish ZTE would just copy everything Google is doing when it comes to the camera. My Pixel XL blows the A7 camera out of the water. It's not even really close.

  • dnewman007 Las Vegas Posts: 3,085 ✭✭✭✭✭✭✭✭
    edited February 12, 2018 5:36PM

    I am not a camera expert, so reading this thread is interesting to me. All I know is the Pixel and Samsung (Galaxy S series) devices I have tested have excellent point-and-shoot cameras. Hopefully ZTE can get to that level of camera on the new A9 coming out this year.

  • musicdjm Worcester, MA Posts: 3,012 mod

    @westside80 said:
    I wish ZTE would just copy everything Google is doing when it comes to the camera. My Pixel XL blows the A7 camera out of the water. It's not even really close.

    We could only wish. The major challenge with that is the millions Google spent on the many different engineering teams and AI experts for their camera. Even if ZTE used the same sensor and Google's API, they would still have to spend considerable amounts of money and development to get to the same level camera-wise as Google. Every OEM out there is facing the same challenge; without selling millions of a device, it's pretty impossible to spend millions on one component.

  • fzrrich Buffalo, New York Posts: 3,964 mod

    Regardless of your take on which direction ZTE should go, I think we all agree they need to focus on a kick a*s camera for the Axon 9! :)

  • musicdjm Worcester, MA Posts: 3,012 mod

    @fzrrich said:
    Regardless of your take on which direction ZTE should go, I think we all agree they need to focus on a kick a*s camera for the Axon 9! :)

    Absolutely, I think we all agree they need to kick it up a notch.

    It would be so smart if Sony just spent all the money it needed to make their own incredible photo-tuning APIs to sell with their sensors to all OEMs. With their economy of scale they could easily make a profit, and all the OEMs would benefit. Include a dedicated processing chip on each sensor, and then spend the millions of dollars needed to complete the AI and everything else needed for the photo tuning to be unmatched by anyone else in the industry.

  • kevinmcmurtrie Silicon Valley area Posts: 294 ✭✭✭✭✭✭

    @razor512 said:
    What you described is basically mean stacking; it is something that can be done in post if the subject is not moving. It allows you to remove noise without really sacrificing any detail. The issue is that it does not allow you to bring back any details that were lost due to a high ISO pushing those details below the SNR. Mean stacking, where possible, will always give better results than the built-in noise reduction that a camera app will do. Any details that are right on the edge of being lost in the noise will be recovered, but anything below the SNR is simply not captured, so no amount of post-processing can bring out data that was never captured.

    If needed, I did a tutorial on mean stacking with smartphone images (if you have a good amount of storage and there is no movement, do a burst shot just in case :).
    https://community.zteusa.com/discussion/2173/android-tips-capture-clean-low-light-photos-hand-held-and-without-long-exposure#latest

    Mean stacking can handle camera movement but no movement in the scene.

    The Adobe Photoshop stacking you've described is trivially simple and long obsolete on cell phones. Even my 2014 phone had a super-resolution mode that worked to some degree with moving people. It's a lot more sophisticated now. Samsung, Sony, and Apple are keeping their processing secret so it's difficult to know exactly what tech they have. Google is looking into using AI to stitch stacks better.

  • jimlloyd40 Phoenix, AZ Posts: 15,845 ✭✭✭✭✭✭✭✭

    @musicdjm said:

    @westside80 said:
    I wish ZTE would just copy everything Google is doing when it comes to the camera. My Pixel XL blows the A7 camera out of the water. It's not even really close.

    We could only wish. The major challenge with that is the millions Google spent on the many different engineering teams and AI experts for their camera. Even if ZTE used the same sensor and Google's API, they would still have to spend considerable amounts of money and development to get to the same level camera-wise as Google. Every OEM out there is facing the same challenge; without selling millions of a device, it's pretty impossible to spend millions on one component.

    @musicdjm if ZTE would just incorporate the Camera2 API, ZTE wouldn't have to do anything with the camera because we could just use the Google camera app with HDR+. My Razer phone took lousy pictures when it first came out, but since Razer had programmed the Camera2 API into the software, I installed the Google camera app and the pictures are great.

  • musicdjm Worcester, MA Posts: 3,012 mod
    edited February 13, 2018 11:48PM

    @jimlloyd40 the Camera2 API doesn't provide the code to optimize each individual OEM's sensor; enabling it is basically a couple of lines edited in the build.prop. Google doesn't build the firmware for other OEMs, so simply adding the Camera2 API doesn't solve the problem; it's the Google camera app that then uses machine learning to produce a better-tuned photo. They would still need to get the API working in their own camera app as well, or get Google to certify their preloading of the Google camera app, which I imagine requires third-party testing or something. They also still need to do all the coding and testing of the firmware for the chipset and camera sensor.
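
    For reference, the build.prop change people usually point to on Snapdragon devices is something like the line below (the exact property name varies by device and Android version, so treat it as an illustration, not an Axon-specific recipe):

        persist.camera.HAL3.enabled=1

    Flipping that flag only exposes the Camera2/HAL3 interface to third-party apps; it does nothing to tune the sensor or the OEM's own processing, which is exactly the point.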

  • jimlloyd40 Phoenix, AZ Posts: 15,845 ✭✭✭✭✭✭✭✭

    @musicdjm said:
    @jimlloyd40 the Camera2 API doesn't provide the code to optimize each individual OEM's sensor; enabling it is basically a couple of lines edited in the build.prop. Google doesn't build the firmware for other OEMs, so simply adding the Camera2 API doesn't solve the problem; it's the Google camera app that then uses machine learning to produce a better-tuned photo. They would still need to get the API working in their own camera app as well, or get Google to certify their preloading of the Google camera app, which I imagine requires third-party testing or something. They also still need to do all the coding and testing of the firmware for the chipset and camera sensor.

    All I know @musicdjm is that the two phones I've had with the Camera2 API enabled took dramatically better photos after I installed the Google camera app with HDR+.

  • musicdjm Worcester, MA Posts: 3,012 mod

    @jimlloyd40 said:

    @musicdjm said:
    @jimlloyd40 the Camera2 API doesn't provide the code to optimize each individual OEM's sensor; enabling it is basically a couple of lines edited in the build.prop. Google doesn't build the firmware for other OEMs, so simply adding the Camera2 API doesn't solve the problem; it's the Google camera app that then uses machine learning to produce a better-tuned photo. They would still need to get the API working in their own camera app as well, or get Google to certify their preloading of the Google camera app, which I imagine requires third-party testing or something. They also still need to do all the coding and testing of the firmware for the chipset and camera sensor.

    All I know @musicdjm is that the two phones I've had with the Camera2 API enabled took dramatically better photos after I installed the Google camera app with HDR+.

    Yeah, hands down Google's processing software is unmatched.

  • razor512 New York Posts: 2,223 ✭✭✭✭✭✭✭✭
    edited February 14, 2018 12:43AM

    Camera2 API is a massive amount of code; the one line of code you are thinking of is just the line that enables the API, which is natively part of Android. Some device makers just disable it for some reason.

    The main benefit of it is that it allows for additional controls over the camera module, as well as for 3rd party apps to gain access to the raw sensor data.

    Once you have that, it is up to the individual app to process the image data and not the API.

    The benefit of a 3rd party camera app using the Camera2 API is that if it can access the raw data, then things like tone mapping can be done to the image by that app; so if you like how the Google camera app does tone mapping more than how the ZTE camera app does it, you can use that. Without it, you are stuck with 3rd party apps getting an already-compressed 8 bit per channel output from the camera instead of what would otherwise be a 10-14 bit per channel image.

    I personally feel that every smartphone, regardless of price point, should offer at least access to the raw image data, as it is data that is already captured; they just need to add some additional code to dump the raw buffer to a file, and possibly convert that data to a format like DNG.

    For those who want a sample raw file from the Axon 7, I attached one to this reply as a .zip file.

    https://developer.android.com/reference/android/hardware/camera2/package-summary.html
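
    If anyone wants to check what the API actually exposes on their phone, here is a minimal Java sketch (the class name Camera2Probe is just a placeholder) that asks Camera2 for each camera's reported hardware level and whether RAW capture is advertised. On devices where the maker has not enabled proper Camera2 support, the hardware level typically comes back as LEGACY and the RAW capability is missing, which is why 3rd party apps end up stuck with the compressed 8 bit output.

    import android.content.Context;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraManager;
    import android.util.Log;

    public class Camera2Probe {
        // Logs each camera's supported hardware level and whether RAW capture is advertised.
        public static void logCameraCapabilities(Context context) throws Exception {
            CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
            for (String id : manager.getCameraIdList()) {
                CameraCharacteristics c = manager.getCameraCharacteristics(id);
                Integer level = c.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
                int[] caps = c.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
                boolean rawSupported = false;
                if (caps != null) {
                    for (int cap : caps) {
                        if (cap == CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_RAW) {
                            rawSupported = true;
                        }
                    }
                }
                Log.i("Camera2Probe", "camera " + id + " hardwareLevel=" + level
                        + " rawSupported=" + rawSupported);
            }
        }
    }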

  • jimlloyd40 Phoenix, AZ Posts: 15,845 ✭✭✭✭✭✭✭✭

    @DrakenFX if the Camera2 API is natively part of Android, what possible reason would an OEM have to disable it? It's more effort for the OEM to disable the Camera2 API than it is to do nothing at all.
