"I was curious to see what a camera could 'see' if we removed the lens from an image sensor"
Many labs around the world are studying light-matter interaction at the nanoscale. The work of Rajesh Menon, an expert in micro/nanofabrication and photonics, and a Professor at the University of Utah, stands out for its scope and creativity. His team pushes photolithography and imaging beyond the diffraction limit, developing optical elements for photovoltaic systems and algorithms for computational imaging. Rajesh Menon has kindly agreed to tell The Lithographer more about his research.
The Lithographer (TL): First of all, thank you for finding the time for this interview. I hope we have described your research scope correctly. Would you like to highlight anything further?
Rajesh Menon (RM): Thank you for your interest. We are a team of scientists and engineers working in three areas: nanophotonics, nanofabrication, and computational imaging. Our goal is to combine these areas to enable applications that are not otherwise possible. To give you an example, we recently showed that a single nanofabricated flat lens can be achromatic over a very large operating bandwidth (from the visible to the long-wave infrared), something that was generally thought to be impossible!
TL: That is really impressive! And how does the computational imaging come into play here?
RM: Computational imaging is very important now that we appreciate imaging as a form of information transfer. Another way to look at it: conventional cameras are anthropomorphic, but information doesn't have to be. This opens up an enormous design space for new "non-human" cameras. For example, such cameras could be used where the hardware/software design implicitly includes privacy (as in the case of facial recognition) and no human needs to be in the data loop.
TL: How did you arrive at this specific research area?
RM: Most research projects arise from solving problems we face within existing projects, from meetings with scientists in other areas, or simply from curiosity. For example, back in 2003 I had lunch with Stefan Hell (the 2014 Nobel Laureate in Chemistry), who discussed his technology for super-resolution optical microscopy. As a young scientist, I was inspired to take his ideas and apply them to optical lithography, which led me down a productive research journey of more than ten years. At another point, I was curious to see what a camera could "see" if we removed the lens from an image sensor. At that time, I was learning about the information theory of imaging. A very sharp undergraduate student took this project up over a summer and devised very clever experiments showing that imaging could indeed be achieved.
TL: Speaking of students, how is your team organized? Where are your former students now, do you work with any of them?
RM: We are a fairly decentralized and non-hierarchical team of scientists and engineers. The team workflow is more organic than strict, and it changes based on the problems we are trying to solve. I should point out that we are heavy users of remote collaboration tools, such as Slack, Zoom, etc. We collaborate widely across disciplines (Computer Science, Math, Physics, Biology, Neuroscience, Materials Science, Chemistry, etc.) and also across the planet, as one person can't have sufficient expertise to tackle all these knowledge areas. The most fruitful collaborations happen when there are mutually aligned interests, naturally. Most of my former students are working in industry (at companies such as Intel, Apple, and Micron), and a few are in government or academia. I currently collaborate with two of them on projects in the field of micro- and nanophotonics.
TL: Your publications suggest a long history of maskless and sub-diffraction-limit optical lithography. So here is a chicken-and-egg question: do microfabrication capabilities give you ideas for what kinds of devices to make, or do you have an idea for a device first and then find a way to make it with the tools at hand?
RM: As a student, I was building the nanofabrication tools, and at that time applications were chosen to showcase the capabilities of the tools. Nowadays, this has flipped to some extent, so the device requirements come first. Part of the reason is that we have such a large palette of tools to choose from, of course.
An ultra-compact nanophotonic polarization rotator. Photonic functions can be encoded as subwavelength pixel distributions that are computationally optimized. Such devices are called digital metamaterials. 
TL: You use a few interesting mix-and-match micro/nanofabrication methods. Can you tell us a bit more about them?
RM: With the amazing commercial developments in nanofabrication tools and processes, we are in a golden age where we can dream up devices and get them made. So we stand on the shoulders of the giants, the engineers and scientists who have made this possible. We work on two types of devices. First, those driven by system-level requirements that are technologically more mature and typically fabricated using standard processes (sometimes mix-and-match). Second, devices for which it is not clear what the best fabrication method is. Such devices, of course, require lots of experimentation and discussion with fabrication/process experts. Nowadays, we use grayscale direct laser lithography, focused ion beam lithography, scanning electron beam lithography, nanoimprint lithography, and contact photolithography.
TL: You are pushing your direct laser lithography tool (µPG from Heidelberg Instruments) to the limit of its performance. What are the most special things you have used it for?
RM: I am very fond of the µPG, as its flexibility enables us to do very nice things. It is hard to pick a favorite, but we have made lots of grayscale microstructures, such as flat lenses, broadband holograms, and many other micro-optic elements. We are still using it to make devices that have never been demonstrated before.
TL: You also develop absorbance-modulation optical lithography (AMOL) and patterning via optical-saturable transitions (POST). Have other research or industry labs adopted these methods yet?
RM: These lithography methods are still under development. As far as I am aware, a lab at MIT is exploring AMOL and related approaches. AMOL, POST, and other related techniques will hopefully be incorporated into commercial systems in the near future, but I don't have a good intuition for how long it will take them to reach process maturity.
SEM image of a flat lens composed of concentric rings of varying heights.
TL: A few papers came out recently about your work with 3D micro-optical elements, specifically, flat lenses. What is so special about them?
RM: These flat lenses, which we call multi-level diffractive lenses, are special for a few reasons: they allow efficient broadband imaging; image aberrations can potentially be corrected with a single lens; and they are extremely lightweight, thin, and can be manufactured at low cost. These devices arise from a simple yet fundamental insight: in imaging, the sensor only records the intensity of light and ignores the phase. As a result, the phase distribution of light in the image plane can be treated as a free (unconstrained) variable. We showed that this insight implies the ideal lens has an infinite variety of solutions, not the single parabolic phase function that is often taught in optics textbooks. This large degeneracy of the lens phase-transmittance function enables new lens designs, such as a lens that is achromatic over a huge bandwidth, or a lens with an extreme depth of focus, as we also demonstrated recently. Such lenses can be used in thin cameras (security, consumer electronics, etc.), lightweight cameras (aerospace, UAVs), LIDAR, and more.
In imaging, the sensor only records intensity of light and ignores the phase.
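To make the intensity-only point concrete, here is a minimal numerical sketch (not from the interview; the 1-D amplitude profile and the quadratic phase are invented for illustration). It shows that a sensor recording only |E|² cannot distinguish two fields that share an amplitude but carry different phases, which is exactly why the image-plane phase is a free design variable:

```python
import numpy as np

# Hypothetical 1-D field at the image plane: the amplitude is fixed by
# the desired image, while the phase can be chosen freely.
x = np.linspace(-1.0, 1.0, 512)
amplitude = np.exp(-x**2 / 0.1)          # target image profile

phase_a = np.zeros_like(x)               # flat phase
phase_b = 40.0 * x**2                    # an arbitrary quadratic phase

field_a = amplitude * np.exp(1j * phase_a)
field_b = amplitude * np.exp(1j * phase_b)

# A sensor records only intensity |E|^2, so the two fields are
# indistinguishable to it even though their phases differ.
intensity_a = np.abs(field_a)**2
intensity_b = np.abs(field_b)**2

print(np.allclose(intensity_a, intensity_b))  # True
```

Every distinct choice of image-plane phase corresponds to a different admissible lens phase-transmittance function, which is the degeneracy the design algorithms exploit.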
TL: How difficult is it to fabricate them?
RM: Ideally, we would fabricate a master containing the geometry of the flat-lens design. Then, using low-cost replication technologies (like nanoimprint lithography), we could set up large-volume manufacturing. The key fabrication challenges are resolution and scalability to larger areas. However, I should point out that these challenges are vastly easier to address for flat lenses than for metalenses or other nanophotonic devices.
TL: Can you tell us more about computer-aided brain imaging?
RM: The motivation for this project is to gain insight into the mechanisms of normal cognitive functions, such as how long-term memories are acquired and stored, and also to study various psychiatric conditions. We aim to record high-resolution videos from deep regions of an animal brain with as little trauma as possible. This technology, which we call "Computational Cannula Microscopy", involves surgically inserting a cannula (an optically transparent needle) into the brain of a mouse. The mouse is either genetically engineered to express fluorescent cells or transfected via a virus-based injection. Light is then piped into the brain to excite these cells, and the emitted fluorescence is collected and piped back to the outside world via the same cannula. Since the cannula is not an imaging element, spatial details of the fluorescent cells are lost in this process. We developed algorithms to convert this "jumbled mess" back into an image that humans can interpret.
One of the main problems that all computational imaging technologies have to deal with is the impact of signal-to-noise ratio on image resolution and field of view. We now employ machine learning for this task. This project is a good example of a problem that requires both new algorithms and improved hardware (for example, nano- or micro-engineering the geometry of the cannula). In fact, we are just starting to 3D print customized cannulae. This is a very exciting area of research that combines input from computer scientists, neuroscientists and optical engineers.
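The unscrambling step can be pictured as a linear inverse problem. The sketch below is purely illustrative and not the group's actual pipeline (which uses machine learning): the random transmission matrix, array sizes, noise level, and regularization weight are all assumptions. A calibrated scrambling matrix A mixes a sparse scene of "cells" into a jumbled measurement, and regularized least squares recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the cannula: an unknown but fixed scrambling of the
# scene, modeled as a random transmission matrix A (in a real system,
# A would be measured once by calibration).
n_scene, n_meas = 64, 256
A = rng.standard_normal((n_meas, n_scene))

scene = np.zeros(n_scene)
scene[[10, 30, 50]] = 1.0                # a few "fluorescent cells"

# The "jumbled mess": scrambled scene plus sensor noise.
y = A @ scene + 0.01 * rng.standard_normal(n_meas)

# Tikhonov-regularized least squares:
#   x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print(np.argsort(x_hat)[-3:])  # indices of the brightest recovered pixels
```

The signal-to-noise trade-off RM mentions appears here directly: as the noise term grows relative to A @ scene, recovering fine spatial detail requires stronger priors, which is where learned reconstruction takes over from simple regularization.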
TL: It would be very interesting to see how this project develops! One final question: if you could fabricate any kind of micro- or nanodevice and did not care about difficulty, cost, or time of production, what would it be?
RM: Emergent behaviour is common in the natural world; think of consciousness in our brain. I believe something similar arises when a huge number of non-repeating photonic unit cells work together to create emergent macroscale photonic behaviour and properties. Such phenomena are extremely hard to explore due to the massive computational and fabrication challenges. So, if those limitations could be overcome, exploring such emergent photonic phenomena would be amazing.
References:
P. Wang, N. Mohammad & R. Menon, Scientific Reports 6, 21545 (2016)
G. Kim & R. Menon, Optics Express 26, 22826-22836 (2018)
S. Banerji et al., arXiv preprint arXiv:1910.07928
N. Mohammad et al., Scientific Reports 8, 2799 (2018)
A. Majumder et al., OSA Continuum 2, 1754-1761 (2019)
G. Kim et al., Scientific Reports 7, 44791 (2017)
Learn more about the Heidelberg Instruments' tool for grayscale lithography DWL 66+ and universal table-top maskless aligner µMLA or send a request to Heidelberg Instruments' Sales team.