Megapixels: what they are and how much they really matter

The term Megapixel is one of the values used to describe cameras and photographic sensors. But how much do Megapixels really matter in digital cameras?

When it comes to digital photography, some users are still impressed today when faced with high Megapixel figures. In reality, Megapixels have no direct bearing on photo quality: what matters most are the optics and the quality of the RGB sensor that equips the photographic device. We will discuss this in more detail later in the article.

Once upon a time it would have been implausible; today, however, software and artificial intelligence algorithms for image optimization increasingly have their say. We saw this in the article where we explain how to avoid blurry photos using a smartphone.

What are Megapixels

The pixel is the smallest conventional unit of the surface of a digital image. The pixels that make up an image are aligned in a rectangular grid: their size and density vary, but their combination produces the perception of a single image.

Starting from the pixel resolution of the photos a device produces, you can immediately work out the camera’s Megapixels. For example, if the images have a resolution of 4000×3000 pixels, multiplying the two values tells us they are composed of 12 million pixels, or 12 Megapixels.
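The calculation above can be sketched in a few lines of Python:

```python
# Derive a camera's Megapixel count from the resolution of its photos.
width_px, height_px = 4000, 3000  # resolution of the images the device produces

total_pixels = width_px * height_px       # 12,000,000 pixels
megapixels = total_pixels / 1_000_000     # 12.0 Megapixels

print(f"{width_px}x{height_px} -> {total_pixels:,} pixels = {megapixels:.0f} MP")
```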

The photographic sensor and image acquisition

The photographic sensor of a smartphone is a device that captures the light entering through the lens and produces a digital image. The surface of a sensor contains millions of photoreceptors: they broadly correspond to the concept of pixels and are responsible for capturing light. The photons captured by each photoreceptor are converted into an electrical signal, whose intensity varies with the number of photons captured.

Imagine each pixel as a bucket that collects rainwater. The rain is the light that enters the photographic sensor.

If the bucket is filled to the top, the camera’s processor determines that it is a white pixel; if the bucket is empty, you have a black pixel. Anything in between will be a particular shade of gray.
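A toy Python illustration of the bucket analogy, assuming a hypothetical full-well capacity (the figure below is made up) and an 8-bit grayscale output:

```python
# Toy model of the bucket analogy -- NOT a real sensor pipeline.
# The fill level of each "bucket" (photon count) is mapped to an 8-bit value:
# an empty bucket is black (0), a full one is white (255).
FULL_WELL = 10_000  # hypothetical full-well capacity, in photons

def photons_to_gray(photons: int) -> int:
    fill = min(photons, FULL_WELL) / FULL_WELL  # clip at saturation
    return round(fill * 255)

print(photons_to_gray(0))        # empty bucket  -> 0 (black)
print(photons_to_gray(10_000))   # full bucket   -> 255 (white)
print(photons_to_gray(5_000))    # half full     -> a mid gray
```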

Sensor of a smartphone: how color rendering works

Smartphone sensors don’t actually see color: they use a colour filter array (CFA) to correctly reproduce color information in the final digital image.

The most common filter is known as the Bayer filter array and consists of alternating rows of the three primary colors: red, green, and blue. Half of the matrix is made up of green filters, while blue and red occupy a quarter each: the reason is that our eyes are naturally more sensitive to green light.

If a sensor only receives information about red, green, and blue, how do pixels collect information about secondary and tertiary colors? Secondary and tertiary colors derive from combinations of the primary colors in the RGB (Red, Green, Blue) or CMY (Cyan, Magenta, Yellow) color models, the most common models used in digital imaging and printing.

The collection of chromatic information occurs through an interpolation process known as demosaicing: the camera processor (ISP, image signal processor) calculates the missing color values in each pixel by examining the color values of neighboring pixels.
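A minimal sketch of the idea, assuming a tiny RGGB Bayer mosaic and simple bilinear averaging (real ISPs use far more sophisticated, edge-aware algorithms; the sample values are invented):

```python
# Tiny Bayer mosaic in the RGGB pattern: each cell holds the single color
# sample that pixel actually recorded.
#   R G R G
#   G B G B
mosaic = [
    [200, 120, 200, 120],   # R G R G
    [110,  40, 110,  40],   # G B G B
    [200, 120, 200, 120],   # R G R G
    [110,  40, 110,  40],   # G B G B
]

def green_at(y: int, x: int) -> float:
    """Estimate the missing green value at a red/blue site by averaging
    the green neighbors above, below, left and right."""
    neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    vals = [mosaic[j][i] for j, i in neighbors
            if 0 <= j < len(mosaic) and 0 <= i < len(mosaic[0])]
    return sum(vals) / len(vals)

# The pixel at row 2, column 2 recorded only red (200);
# its green value is interpolated from the 4 surrounding green samples.
print(green_at(2, 2))  # -> 115.0
```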

Megapixels, size and printing of photos

Generally speaking, the more Megapixels you have available, the larger the image you can obtain. However, a sensor that acquires photos with fewer Megapixels does not necessarily deliver less detail than a more “generous” one.

In the past we have covered the best parameters for scanning photos and documents and printing them, for example in A4, A3, or other formats.

To print a photo at 300 DPI on a two-page album spread measuring 40×20 centimetres, you need a photo of 40/2.54 × 300 ≈ 4,724 pixels by 20/2.54 × 300 ≈ 2,362 pixels, hence one produced by an 11 Megapixel sensor (4,724 × 2,362 is approximately 11 Megapixels). The constant 2.54 converts centimetres to inches.

Likewise, to print in A4 format (21 × 29.7 centimetres) you need a photo of about 3,508 × 2,480 pixels (approximately 8.7 Megapixels).
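Both calculations can be sketched with a small helper function (just the arithmetic above, not a printing API):

```python
# Pixels needed to print a given size (in centimetres) at a given DPI.
# 2.54 converts centimetres to inches.
def pixels_for_print(width_cm: float, height_cm: float, dpi: int = 300):
    to_inch = 2.54
    w = round(width_cm / to_inch * dpi)
    h = round(height_cm / to_inch * dpi)
    return w, h, w * h / 1_000_000  # width px, height px, Megapixels

print(pixels_for_print(40, 20))     # 40x20 cm album spread -> ~11 MP
print(pixels_for_print(29.7, 21))   # A4                    -> ~8.7 MP
```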

Printing photos from WhatsApp, for example, is generally not advisable unless you limit yourself to 10×15 cm or smaller.

For small-format prints (postcard size), publication on the Web, or display on a monitor, there is no reason to be hypnotized by devices offering dozens of Megapixels.

Why Megapixels are a false myth

To understand why Megapixels are a “false myth”, just consider one fact: digital reflex cameras costing hundreds of euros and offering “few” Megapixels produce much more detailed images than compact digital cameras with tens of Megapixels. It follows that the size of the sensors and their technical characteristics matter much more than the Megapixel count.

It is also worth highlighting that a sensor capable of acquiring 12 Megapixels does not allow prints twice as large as a 6 Megapixel one. Using the simple formula seen previously, the first sensor allows a 300 DPI print of 33.87 × 25.4 cm, while the second can produce photos suitable for printing at most 23.84 × 17.88 cm.

As you can see, doubling the Megapixels does not double the print size at all.

The progress made by modern smartphones with quality sensors

In short, many Megapixels do not necessarily translate into quality photos. Modern mid-to-high-end smartphones have made enormous progress on the photographic side, as can also be seen on DxOMark. The advent of new Sony and Samsung sensors represented an important step.

It must be said that even recent smartphone sensors with 200 Megapixels or more exploit technologies that combine light information from multiple pixels (pixel binning). The Samsung ISOCELL HP2 photo sensor is one of the most advanced available in a commercial product.

ISOCELL HP2, for example, takes advantage of Tetra2pixel technology, which combines up to 16 pixels into one, allowing the sensor to operate as if it had 1.2 µm or even 2.4 µm pixels. The goal is evidently to capture details even in poor lighting conditions, while offering fast focusing and a high frame rate when recording high-resolution video.

The ISOCELL HP2 sensor is also equipped with Super QPD technology for improved autofocus in poor lighting conditions, allowing you to capture movement even in environments with as little as 1 lux of illumination (a room lit by a single candle).

How technologies like Tetra2pixel work

As mentioned, Tetra2pixel uses an advanced pixel binning technique: the sensor can simulate different pixel sizes to adapt to various lighting levels. In low light, the sensor combines four or even sixteen adjacent pixels into one, simulating a larger pixel.

Tetra2pixel represents the latest iteration of Samsung’s pixel binning technologies; the previous ones were ChameleonCell, Tetracell, and Nonacell.

In the case of Tetracell, a 108 Megapixel Quad Bayer sensor acquires light information from four photoreceptors, combining data from four 0.8 μm pixels into a single 1.6 μm pixel. With Tetra2pixel, as we have seen, the equivalent pixel can reach up to 2.4 µm. In the Tetracell example, after the 4:1 conversion the resulting image will however have a resolution of “only” 27 Megapixels (108 Megapixels divided by 4).

Only in good daytime lighting, for example in open environments with plenty of natural light, will the sensor adapt its acquisition mode to create a full 108 Megapixel shot.

Nonacell collects light information from nine neighboring pixels (a 3×3 matrix), while with ChameleonCell Samsung goes as far as 16 pixels.
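The 4:1 combination described above can be sketched as a simple sum over 2×2 blocks (a toy model: real sensors bin charge or signal on-chip, and 16:1 binning would apply the same step again to the binned result):

```python
# 4-to-1 pixel binning (the Tetracell idea): values from each 2x2 block of
# neighboring pixels are combined into one larger "pixel", halving the
# resolution on each axis while gathering 4x the light per output pixel.
def bin_2x2(frame):
    h, w = len(frame), len(frame[0])
    return [[frame[y][x] + frame[y][x + 1] +
             frame[y + 1][x] + frame[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A 4x4 frame of raw photon counts becomes a 2x2 binned frame.
frame = [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]
print(bin_2x2(frame))  # -> [[14, 22], [46, 54]]
```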

