All conventional lenses have limited resolution. Even with the strongest optical microscope we cannot see atoms, molecules or nanostructures; for this we need electron or atomic-force microscopy. The wavelength of light sets the limit: about half a micrometer for visible light.
Strawberry under the microscope. The picture was taken by a scanning electron microscope, because optical microscopes would not reveal details much smaller than the wavelength of light. Image: David Scharf.
Around 1870 the German physicist Ernst Abbe of the University of Jena established the theory of optical imaging and deduced the resolution limit of lenses caused by the wave nature of light. Before Abbe, making a good lens was a matter of trial and error. Abbe's theory enabled him and his collaborators, the university technician Carl Zeiss and the entrepreneur Otto Schott, to create the entire modern optics industry. Carl Zeiss and Otto Schott are still household names more than a hundred years later.
Abbe's formula. This formula, on a memorial stone in Jena, describes mathematically how the wavelength of light limits the resolution of the lens. Image: Frank Kafka.
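The formula on the stone expresses Abbe's diffraction limit, usually written d = λ/(2 n sin α): the smallest resolvable detail d in terms of the wavelength λ, the refractive index n of the medium and the half-opening angle α of the lens. A few lines of Python make the numbers concrete (the function name and the chosen values are ours, for illustration only):

```python
import math

def abbe_limit(wavelength, n=1.0, alpha_deg=90.0):
    """Abbe's diffraction limit d = wavelength / (2 n sin(alpha)):
    the smallest detail a lens can resolve."""
    return wavelength / (2 * n * math.sin(math.radians(alpha_deg)))

# Green light (wavelength 500 nm) with an ideal lens in air:
d = abbe_limit(500e-9)          # about 2.5e-7 m, a quarter of a micrometre
# Immersing the lens in oil (n ~ 1.5) improves the resolution somewhat,
# but never by orders of magnitude:
d_oil = abbe_limit(500e-9, n=1.5)
```

Even under the most favourable conditions the resolvable detail stays at the scale of the wavelength, which is why atoms and molecules remain invisible to conventional optical microscopes.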
These days, the resolution limit of lenses constrains the microchip technology needed for making ever faster computers and smarter smartphones. Chipmakers photograph the structures of billions of tiny transistors onto silicon chips by photolithography. To meet the insatiable appetite for more and more transistors that need to be smaller and smaller, the resolution limit of lenses forces chipmakers to use light with ever shorter wavelengths, which gets increasingly difficult. Is there a way around the imaging problem?
Scanning electron microscope picture of an electronic chip. The details you see are made by photolithography: they are photographed onto the chip. According to Abbe's formula, the wavelength of light limits the size of the finest structure one can make.
In 2000, Sir John Pendry of Imperial College London published a remarkable theoretical result: a lens made of a negatively refracting material, called a superlens, is perfect in theory; it circumvents Abbe's limit. Negative refraction does not occur in natural optical materials, but it can be achieved in artificial metamaterials. Pendry's paper initiated and inspired the entire research area of metamaterials, which has become one of the liveliest trends in physics and engineering. How does the negatively refracting lens do the trick?
It all comes down to magic. Well, not really, but we found that the negatively refracting lens does something another "magical device" - the invisibility cloak - does as well: it transforms space. The optical transformation of space alone is nothing unusual, because ordinary optical materials like glass or water seem to change our perception of space as well - things inside them may appear at different places than where they actually are. We may say that a piece of glass transforms space for light. Yet what the negatively refracting lens does is something no ordinary material can do: it folds space. To understand this, imagine three-dimensional space as a stack of two-dimensional sheets, like a stack of paper. The lens appears to fold each sheet like this:
You can try out the secret of perfect imaging with a sheet of paper. Fold it as shown in the picture. Then cut a hole through the fold and open it. You get three identical copies of the hole. In optical imaging, the first hole represents the object you want to see; the other two holes are the images. One is formed where the fold goes backwards; this is where the device does its magic, so the first image is formed inside the lens. The second image appears on the other side of the lens and is the image one could see. The images and the object are absolutely identical: the imaging is perfect, at least in theory. But does it work in practice? To form the perfect copies of the original at two other positions in space, light must hop to them in an instant, which, according to Einstein's relativity, is impossible over macroscopic distances. Therefore only so-called "poor man's perfect lenses" have worked, where the images are less than a wavelength away from the original. Clearly, such lenses are of rather limited practical use.
James Clerk Maxwell was one of the world's greatest scientists. 150 years ago he wrote the equations of the electromagnetic field and discovered that light is an electromagnetic wave. Most of modern technology (electricity, optics, television, wifi etc.) owes a great debt to him. Perfect imaging is directly inspired by Maxwell.

However, there is a positive alternative to perfect imaging with negative refraction. All we need to do is take an idea of the Scottish polymath James Clerk Maxwell from 1854 into the 21st century. As a student at Trinity College in Cambridge, Maxwell wrote down the formula for a lens that reminded him of the eyes of fish (legend has it that he dreamed up his lens by musing over the eyes of a kipper at breakfast). In Maxwell's "fish eye" light rays from any point faithfully meet at a corresponding image point; all light rays from the object make it to the image. If light consisted of particles that follow the ray trajectories, it would form a perfect image. But light is also a wave; and we know from Abbe that it is the waviness of light that limits the resolution of lenses. It was assumed that in reality the resolution would again be about half the wavelength. In 2009 we predicted that this is not so: Maxwell's fish eye should image waves with perfect resolution. This contradicted the accepted wisdom of subwavelength imaging and created controversy. Now there is strong experimental evidence that it works. To understand why it works we need a bit of Einstein.
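Maxwell's formula is a refractive-index profile that decreases smoothly from the centre of the lens, n(r) = n0/(1 + (r/a)²), where a is the radius of the device. Here is a minimal sketch (the function name and the normalisation n0 = 2, a = 1 are our illustrative choices):

```python
def fish_eye_index(r, n0=2.0, a=1.0):
    """Maxwell's fish-eye profile n(r) = n0 / (1 + (r/a)**2):
    the refractive index falls off smoothly away from the centre."""
    return n0 / (1 + (r / a) ** 2)

print(fish_eye_index(0.0))  # 2.0 at the centre
print(fish_eye_index(1.0))  # 1.0 at the device radius r = a
```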
Einstein's idea of curved space explains why Maxwell's fish-eye lens forms a perfect image.
We mentioned already that optical materials change the perception of space for light. They conjure up a virtual space for light - the space as seen by light - that may be very different from physical space. (This can cause optical illusions, for example.) The virtual space can be curved in a similar way to how space-time is curved by gravity according to Einstein's general relativity. Curved space is not too difficult to imagine, in particular two-dimensional curved space. The simplest curved space is the surface of a sphere. And, as it happens, the sphere's surface is exactly the virtual space of Maxwell's fish eye. In 1944 Rudolf Luneburg explained how Maxwell's fish eye is connected to the virtual sphere: by the stereographic projection illustrated below.
Stereographic projection. The sphere (red) is projected to the plane (blue). The figure shows a cut through the sphere. A point (red dot) on the sphere with coordinates X, Y, Z is projected from the North Pole N (black dot) to the plane (blue dot) with coordinates x and y (here Y=0 and y=0). The South Pole S is projected to the origin and the North Pole to infinity.
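In one common convention, projecting the unit sphere from the North Pole N = (0, 0, 1) onto the equatorial plane gives x = X/(1 - Z) and y = Y/(1 - Z). The short sketch below (names are ours) checks the properties stated in the caption:

```python
def stereographic(X, Y, Z):
    """Project the point (X, Y, Z) on the unit sphere from the
    North Pole N = (0, 0, 1) onto the plane Z = 0."""
    return X / (1 - Z), Y / (1 - Z)

# The South Pole S = (0, 0, -1) lands at the origin:
print(stereographic(0.0, 0.0, -1.0))  # (0.0, 0.0)
# The equator maps to the unit circle:
print(stereographic(1.0, 0.0, 0.0))   # (1.0, 0.0)
```

Points approaching the North Pole are sent further and further out, which is why N itself is said to be projected to infinity.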
The virtual space of Maxwell's fish eye is the surface of the sphere. Now, imagine light is confined to propagate on the sphere (one can demonstrate this in physical reality - with a real sphere, not a virtual one - using a glass sphere where light clings to the surface).
In Maxwell's fish-eye light goes in circles (yellow) on the virtual sphere. Circles on the sphere appear as circles (red) in the plane by stereographic projection.
On the sphere, light no longer propagates in straight lines but follows the shortest lines between two points - the geodesics, which are curved. These are the great circles (circles with their centre at the middle of the sphere). They have the following property: all great circles starting from one point on the sphere end up at the antipodal, conjugate point. (Try it out with a globe!) As the great circles are the paths of light rays on the sphere, all the rays emitted from a given point must meet again at the antipodal point, and this is true for all points of emission. The antipodal points form a perfect image of all the points of the original. From the symmetry of the sphere it also follows that light waves are as perfectly focused as light rays: a wave emitted from any point on the sphere will focus at the corresponding antipodal point with point-like precision. As Maxwell's fish eye implements the geometry of the sphere, it may create perfect images.
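The antipodal focusing is easy to verify numerically: a ray starting at a point p on the unit sphere with tangent direction d (perpendicular to p) traces the great circle p cos θ + d sin θ, and after half a revolution, θ = π, it arrives at -p whatever direction it set out in. A small sketch (the function name is ours):

```python
import math

def along_great_circle(p, d, theta):
    """Point reached after arc length theta on the great circle through
    the unit vector p with unit tangent direction d (d perpendicular to p)."""
    return tuple(math.cos(theta) * pc + math.sin(theta) * dc
                 for pc, dc in zip(p, d))

p = (1.0, 0.0, 0.0)
# Three different starting directions, all tangent to the sphere at p:
for d in [(0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.8, 0.6)]:
    q = along_great_circle(p, d, math.pi)
    print(q)  # up to rounding, every ray arrives at the antipode (-1, 0, 0)
```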
Focusing in curved space. In Maxwell's fish eye electromagnetic waves propagate in a plane in physical space (wave pattern below) as if they were confined to the surface of a sphere (above). A wave emitted from any point on the virtual sphere is focused at the antipodal point. In physical space, waves are as perfectly focused as in virtual space.
To test the imaging with Maxwell's fish eye we built one for microwave radiation. The principal person behind the device is Yun Gui Ma, who was at the time at Temasek Laboratories, National University of Singapore (he is now at Zhejiang University in Hangzhou, China). Like light, microwaves are electromagnetic waves, but with centimetre wavelengths and gigahertz frequencies, which allows us to investigate the electromagnetic fields with a degree of detail currently impossible for visible light. We implemented the device for microwaves confined between two metal plates, which lets them propagate in a plane; the device is inserted between the plates. It looks a bit like a microwave cloaking device and is made of flexible circuit board and dielectric powder. (We did it on a shoestring budget, so it has an old-fashioned, Maxwellian look.) Here is a picture:
Maxwell's fish eye for microwaves. The device is surrounded by a metal mirror. The "home-made" metal structures on the rings modify the electromagnetic properties of the device. We also needed some dielectric powder to top them up. The electromagnetic response of everything combined agrees reasonably well with Maxwell's theoretical formula.
Then we inserted coaxial cables through the bottom of the device (which was sandwiched between two metal plates - not shown in the picture - where we drilled holes for the cables). Some of the cables are linked to a microwave synthesizer that can also serve as a microwave detector. The other cables lead to perfect "dead ends": absorbers that swallow all the incoming microwave radiation without sending it back. For our microwaves these outlets play the role of optical detectors that absorb visible light (and then convert it into signals - we skip that step to keep the experiment simple). A cable inserted through the top is linked to a detector port of the microwave synthesizer; this is the cable we use for detection. We can move the plates relative to each other so that the top cable can scan the entire electromagnetic field. In the first round of our experiments we observed this:
First tests. Figure (a) shows the scheme of our first round of experiments: microwave radiation is injected through the source cable and, possibly, extracted at the outlet cable. If we use an outlet we place it at the point where we expect the perfect image of the source to be formed. The two figures on the right display scans of the microwave intensity in two situations, with (c) and without (b) outlet. We see in (b) that without the outlet we do not get a super-sharp image, but (c) shows a needle-sharp spike, a nearly perfect image of the source.
We found something strange and unexpected. We got a perfect image of the source, but only if we placed an outlet - playing the role of a detector - at the image point. Otherwise we observed a spot that agrees with Abbe's theory. The perfect image appears, but only when you are looking for it. How can we understand this? The easiest way to make sense of electromagnetic waves in Maxwell's fish eye is not to imagine them in physical space, but on the virtual sphere. We know that the two pictures are equivalent. From the symmetry of the sphere it should follow that the wave arrives at the image point exactly like the emitted wave, but run in reverse. However, the symmetry between emission and focusing is not perfect as long as we do not have the counterpart of the source at the image. We need a source run in reverse there, one that, instead of emitting waves, absorbs them; we need an outlet for perfect imaging.
One may object that these experiments do not show imaging at all, just the transfer of waves between the source and an absorber that already happens to be at the right position. So we did another experiment where we placed two sources close to each other and used 10 outlets. In this case not all of the outlets are at the correct positions; only two of them are. The sources were so closely spaced that, according to Abbe's formula, one should not be able to distinguish them; their image should be blurred. Yet our array of outlets resolved them:
Experimental evidence for imaging with subwavelength resolution. Source: two sources (red dots) are placed 0.2 of the local wavelength apart; their scanned field is shown. Image: an array of 10 outlets (red dots) is arranged in an arc in the bottom plate; they act like a detector array. A single outlet (blue dot) inserted through the top plate is moved along the arc over the image outlets and records the intensity. The picture shows the measured intensity normalized with respect to the maximal intensity. The red dots indicate the positions where outlets are present. This detector array clearly resolves the two sources, which unambiguously proves subwavelength imaging.
We even used different types of cable endings for the sources and the outlets, to make sure that the outlets have the maximal resolution possible in our device (the resolution is determined by the structure size of the material). Incidentally, this arrangement also demonstrates that source and detector (in our case source and detector cables) can be very different. In the case of optics, this means that in order to see a certain molecule one does not need the same molecule for detection. Furthermore, our experiment shows that the field concentration at the detectors only happens where it should happen; the peaks we see are not caused by the electromagnetic field always becoming concentrated at places where the wave can leak out, because in that case all outlets would have enhanced the field, but they did not. Note also that the distance between source and image is about 3 wavelengths in our experiment. There is no fundamental reason why the imaging distance cannot be made much longer, because the materials required are not very absorptive. Still, Maxwell's fish eye is a highly unusual "lens" where object and image are inside the material, not outside it as in a normal lens, where the object is on one side and the image on the other. But this is not the last word: there are alternative perfect lenses where object and image are in empty space, for example Minano's lens.
Minano's lens. Light rays emitted from any point inside the "lens" are focused at a corresponding image point.
Perfect lenses with positive refraction are unusual optical instruments - they simply cannot be traditional lenses, because otherwise they would obey Abbe's theory (see also this page). These "lenses" rather resemble mirrors surrounding both object and image. In this way they capture and focus all the light so that no information is lost. But we have also seen that the perfect image only appears when one looks. It is even possible to make magnifying perfect lenses, but so far the magnification is given by the refractive-index contrast, which is not great in ordinary materials. Yet for the optical systems required by the electronics industry a magnification of 4 would already be sufficient. Future research will tell how far the magnification of perfect imaging devices can go.
Summary. We have made the first step towards perfect imaging: we have demonstrated by microwave experiments that the resolution of optical instruments with positive refraction over macroscopic distances is no longer limited by the wave nature of light. In our case, the resolution is given by the structure size of the material. With smooth materials - not metamaterials - imaging can be made perfect.