Why Megapixel? Lenses don’t have a pixel structure, after all!

2015-02-19


Some centuries ago, people noticed with some surprise that in a dark room, an (upside-down) image of the surroundings is sometimes projected through a small opening in a wall.
The old Latin word for room (chamber) is “camera”.
That’s why the first cameras got the name “camera obscura” (= “dark chamber”). One of the first real-life applications was portrait painting.
The same principle is used in so-called “pinhole cameras”:

It’s immediately clear why the image is upside down.
The advantage, however, is that the image is where it would be mathematically expected: there is no distortion (rectangles on the object side become rectangles on the image side), there is no visible dependence on the wavelength, and the depth of field is infinitely large.
The disadvantage is that the resulting image is very dark (so the room must be even darker for the image to be seen at all). The exposure times needed to take an image with today’s cameras could well be minutes!

Idea: Let’s use a larger hole:

Now, however, the image not only gets brighter (as intended) but also blurry, because the light no longer passes only through the center of the hole. So not only the correct image position is exposed to the light, but also its direct neighbours.

As a result, the image of an object point is not just a point, but a little disk, the so-called “Circle of Confusion” (CoC).

For long-distance objects the diameter of the CoC equals the diameter of the hole!
For short-distance objects it is even larger. In other words, the “resolution” is very bad.
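
To make this concrete, here is a minimal sketch (my own illustration with made-up example values, not from the original post) of the blur disk behind a pinhole, obtained from similar triangles:

# Blur disk ("CoC") diameter behind a pinhole, from similar triangles.
# All names and values are illustrative only.
def pinhole_coc(hole_diameter_mm, object_distance_mm, image_distance_mm):
    # The ray cone from an object point keeps widening behind the hole:
    # spot = d * (g + b) / g, with g = object distance, b = image distance.
    g, b, d = object_distance_mm, image_distance_mm, hole_diameter_mm
    return d * (g + b) / g

# A 0.5 mm hole with the image plane 50 mm behind it:
print(pinhole_coc(0.5, 10_000, 50))  # distant object: ~0.50 mm (about the hole diameter)
print(pinhole_coc(0.5, 200, 50))     # close object:    0.625 mm (larger than the hole)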

Wish: Each image point shall be just a mathematical point and not a circle.

Idea: Let’s place a biconvex (“converging”) lens into the hole:

Note: every point of the front lens is reached by light from the object.

How can we predict what size the image will have and where the images of object points will be located?

Two simple rules apply:

Image construction:
Rays through the center of the lens pass straight through the lens.
Rays arriving parallel to the optical axis and passing through the object point are “bent” through the focal point of the lens.
Where these two rays meet is the image of the object point.

We note:

All object points on the plane perpendicular to the optical axis (the “object plane”) are mapped to another plane perpendicular to the optical axis, the “image plane”.

If image and object distances are given, we can calculate the focal length of the lens via the thin lens equation 1/f = 1/g + 1/b (g = object distance, b = image distance).
This approach is used in all the focal length calculators online.
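
As a rough sketch of what such a calculator does (a minimal illustration under the thin-lens assumption; function names and example values are my own):

# Thin lens equation: 1/f = 1/g + 1/b
# g = object distance, b = image distance, both measured from the (infinitely thin) lens.
def focal_length_mm(object_distance_mm, image_distance_mm):
    g, b = object_distance_mm, image_distance_mm
    return 1.0 / (1.0 / g + 1.0 / b)

def magnification(object_distance_mm, image_distance_mm):
    # image size / object size (the image is upside down)
    return image_distance_mm / object_distance_mm

# Object 500 mm in front of the lens, sensor 26.3 mm behind it:
print(round(focal_length_mm(500, 26.3), 1))   # ~25.0 mm
print(round(magnification(500, 26.3), 3))     # ~0.053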

In real life, we notice a slight difference between the theoretical values and the real distances:

Due to this difference between theory and practice:

All focal length calculators that ignore the thickness of the lens give only approximate results, especially at short distances and for wide angles.

But even the thick-lens model (the “paraxial image model”) works with

implicit assumptions:
The lenses are perfect, i.e. have no optical aberrations.
In the case of thin lenses: all lenses are infinitely thin.
Monochromatic light is used.
The model assumes sin(x) = x, an approximation that holds only very close to the optical axis (see the numerical check after this list).
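
This small numerical check (my own illustration, not from the post) shows how quickly the relative error of sin(x) = x grows with the angle from the optical axis:

import math

# Relative error of approximating sin(x) by x, for rays at increasing angles to the axis.
for degrees in (1, 5, 10, 20, 30):
    x = math.radians(degrees)
    rel_error = (x - math.sin(x)) / math.sin(x)
    print(f"{degrees:2d} deg: relative error {rel_error * 100:.3f} %")
# Roughly 0.005 % at 1 degree, but already almost 5 % at 30 degrees.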

There’s good and bad news:

Good news: the Circle of Confusion (“CoC”) can be drastically reduced by the use of converging lenses.

We also notice that:
Objects at different distances result in CoCs of different sizes.
The “acceptable” maximal size of the CoC thus leads to the so-called “depth of field” (see the sketch below).
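
The sketch uses the common geometric formula for the circle of confusion of a defocused object point, c = A · |D − s| / D · f / (s − f), with aperture diameter A = f / F#, focused distance s and actual object distance D; the formula and the example values are my own illustration, not taken from the post:

# Geometric CoC diameter (in mm) of an object at distance D when the lens is focused at s.
def coc_diameter_mm(f_mm, f_number, focus_distance_mm, object_distance_mm):
    aperture_mm = f_mm / f_number
    s, d = focus_distance_mm, object_distance_mm
    return aperture_mm * abs(d - s) / d * f_mm / (s - f_mm)

# 25 mm lens at F#2, focused at 1 m: how blurry are objects at other distances?
for d in (500, 800, 1000, 1500, 3000):
    print(d, "mm ->", round(coc_diameter_mm(25, 2.0, 1000, d) * 1000, 1), "um")
# Only the distances whose CoC stays below the "acceptable" limit
# (e.g. one pixel) lie within the depth of field.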

Bad news: the Circle of Confusion (“CoC”) cannot become arbitrarily small. It will always stay a disk and never become a mathematical point.

In other words: there are no perfect lenses (even if they could be manufactured arbitrarily accurately).

The theoretical size of the smallest possible CoC, even for close-to-perfect lenses (so-called diffraction-limited lenses), is described by the Rayleigh criterion.

Rule of Thumb:
For white light it’s not possible to generate CoCs smaller than the F# measured in micrometers.
The theoretical resolution is half that value.

A diffraction-limited lens of F#4 cannot generate image points smaller than 4 µm in diameter.
The best theoretical resolution is 4 µm / 2 = 2 µm.
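
The rule of thumb, written out as a tiny sketch (the Airy disk formula 2.44 · λ · F# is standard optics added here only for comparison; it is not quoted in the post):

# Rule of thumb for white light: smallest CoC (in um) is roughly the F-number,
# and the theoretical resolution is half of that.
def rule_of_thumb_coc_um(f_number):
    return float(f_number)

def airy_disk_diameter_um(f_number, wavelength_um=0.55):
    # Diameter of the Airy disk (to the first dark ring), for comparison.
    return 2.44 * wavelength_um * f_number

f_no = 4
print(rule_of_thumb_coc_um(f_no))             # 4.0 um smallest spot at F#4
print(rule_of_thumb_coc_um(f_no) / 2)         # 2.0 um theoretical resolution
print(round(airy_disk_diameter_um(f_no), 2))  # ~5.37 um Airy disk at 550 nm, same order of magnitude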

An image appears focused if the CoC is smaller than the pixel structure of the sensor.
See also: “Why can color cameras use lower resolution lenses than monochrome cameras?”

If the image can appear focused on a sensor with n megapixels, then the lens is classified as an n-megapixel lens.

Keep in mind that the megapixel rating refers to the maximum image circle of the lens. If a sensor uses just 50% of the area of the image circle, only half the pixels are supported.

If a 5 megapixel 1″ lens (i.e. image circle 16 mm) is used on a 1/2″ sensor (image circle 8 mm), one should not expect a resolution better than 1.3 (!) megapixels. This is because the area of a 1/2″ sensor is 1/4 (!) of the area of a 1″ sensor, so you lose a factor of 4 in megapixels.
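
The same arithmetic as a small sketch (nominal image-circle diameters taken from the example above; the function name is my own):

# Scale a lens's megapixel rating by the fraction of its image-circle area the sensor uses.
def supported_megapixels(lens_megapixels, lens_image_circle_mm, sensor_image_circle_mm):
    area_fraction = (sensor_image_circle_mm / lens_image_circle_mm) ** 2
    return lens_megapixels * area_fraction

# 5 megapixel lens with a 16 mm (1") image circle on a 1/2" sensor (8 mm diagonal):
print(supported_megapixels(5, 16, 8))  # 1.25 -> roughly 1.3 megapixels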

