Building functional nanostructures with atomic-level precision requires a detailed understanding of materials growth and the physics of self-assembly at the nanoscale. Professor Frances Ross uses transmission electron microscopy to watch crystals as they grow and react, and scanning tunneling microscopy to measure the properties of nanomaterials. These microscopy techniques help her research group explore growth mechanisms during epitaxy, electrochemical deposition, and catalysis for applications in microelectronics and energy storage. She is developing new microscopy instrumentation to enable deeper exploration of these processes.
After earning her undergraduate degree in physics and her doctorate in materials science at the University of Cambridge in the United Kingdom, Professor Ross carried out postdoctoral research at Bell Labs in New Jersey, then joined the National Center for Electron Microscopy in Berkeley, California, as a staff scientist specializing in high-resolution and in situ microscopy. As a research staff member at the IBM T.J. Watson Research Center in New York, she continued to focus on the development and application of electron microscopy to crystal growth, including both liquid- and gas-phase processes. She joined the DMSE faculty in 2018. She is a fellow of several professional societies, including the American Physical Society, the Materials Research Society, the American Association for the Advancement of Science, the Microscopy Society of America, and the American Vacuum Society, and an honorary fellow of the Royal Microscopical Society. Her honors include an honorary doctorate from Lund University in Sweden in 2013 and the IBM Outstanding Accomplishment Award for liquid cell transmission electron microscopy in 2017.
Nanosecond protonic programmable resistors for analog deep learning
Developed programmable resistors, or artificial synapses, that can be used to build analog deep learning processors. Compatible with silicon fabrication techniques, these devices increase the speed and reduce the energy needed to train neural network models.
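To illustrate the general principle behind such analog processors (a minimal sketch of in-memory computing with a resistor crossbar, not a model of the actual protonic devices described above): each resistor's conductance stores a network weight, applying input voltages to the columns produces row currents equal to a matrix-vector product via Ohm's and Kirchhoff's laws, and training nudges each conductance by an outer product of inputs and errors. All names and numbers here are illustrative assumptions.

```python
# Illustrative sketch only: a crossbar of programmable resistors computes a
# matrix-vector product in a single analog step. G[i][j] is the conductance
# (weight) of the resistor at row i, column j; V[j] is the voltage applied
# to column j; the current collected on row i is I[i] = sum_j G[i][j] * V[j].

def crossbar_mvm(G, V):
    """Row currents read out from a conductance crossbar (Ohm + Kirchhoff)."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

def outer_product_update(G, V_in, err, lr=0.1):
    """In-place training step: each resistor is adjusted by the product of
    its row's error signal and its column's input, a gradient-like update
    performed in parallel across the whole array."""
    for i, e in enumerate(err):
        for j, v in enumerate(V_in):
            G[i][j] += lr * e * v

# Hypothetical 2x2 example values:
G = [[0.5, 0.2], [0.1, 0.8]]   # conductances (weights)
V = [1.0, -1.0]                # input voltages
I = crossbar_mvm(G, V)         # row currents, i.e. G @ V in one step
```

The speed and energy advantage comes from performing every multiply-accumulate simultaneously in the physics of the array, rather than shuttling weights between memory and a digital processor.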