Challenges Grow For CD-SEMs At 5nm And Beyond
Author: Gregory Haley | Published: 2023-04-10
This technology continues to evolve to keep pace with smaller features and increasing complexity.

CD-SEM, the workhorse metrology tool used by fabs for process control, is facing big challenges at 5nm and below.

Traditionally, CD-SEM imaging has relied on a limited number of image frames for averaging, which is necessary both to maintain throughput speeds and to minimize sample damage from the electron beam itself. As dimensions get smaller, these limitations result in higher levels of noise, which in turn can limit the quality of CD-SEM images at the most advanced nodes.

First introduced in 1984, CD-SEMs have consistently been the technology of choice for measuring the critical dimensions (CDs) of device features with high precision and accuracy. Because the quality of CD features determines the performance and yield of a device, precise measurement and control are critical. Today, prices for 3nm node (N3) wafers can exceed $15,000, with estimated yields somewhere between 50% and 80%. Every percentage point increase in yield can significantly impact revenue, so the need for accurate CD-SEM images has never been greater.

CD-SEM challenges below N5
Some of the difficulties for CD-SEMs below N5 are familiar. Contamination, sample damage, resolution, throughput, and signal noise are more acute versions of challenges the technology has faced for years. Others are relatively new, such as managing stochastics and the EUV-driven transition to thinner photoresists, which reduces image contrast.

CD-SEM equipment manufacturers and analytics companies are working hard to overcome these challenges and provide the most accurate scans possible without slowing production. The latest evolutions in CD-SEM technology employ a variety of creative and cutting-edge solutions to improve their performance.

With each shrink, the margin for error in circuitry dimensions also shrinks. Below N5, the impact of stochastic effects on device performance becomes increasingly significant. Line-edge roughness (LER), driven in part by photon shot noise, produces random variations in circuit geometry that can lead to unpredictable fluctuations in device performance, including variability in threshold voltage and current leakage. At N3, stochastic errors can become large enough to cause defects, such as line breaks, missing holes, or merged contact holes.

“For EUV, we now have to worry about extremely rare events, because these rare events are happening and killing devices,” says Chris Mack, CEO of Fractilia. “Device manufacturers are building billions, tens of billions, hundreds of billions of contact holes for one device. If one of those holes goes missing, the entire device could be non-functional. Stochastics account for more than 50% of a high-volume manufacturer’s (HVM) total patterning error budget.”

Fig. 1: Stochastic variability is consuming the edge placement error budget. Source: Fractilia


Noise is the enemy of signal
Identifying stochastic CD errors below N5 is particularly difficult because of SEM noise. A CD-SEM scans a beam of electrons across one row of pixels in an image, then moves to the next row, scanning row by row until it has built a complete frame. It then repeats that whole process, typically 4 to 16 times. The resulting image is an average of the individual frames.

Engineers can increase the number of frames to reduce noise in the image and improve the overall signal, but greater electron bombardment increases the potential for sample damage and reduces throughput. There is a constant tradeoff between throughput, noise, and sample damage. Finding the right balance between these factors is crucial.
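The statistics behind this tradeoff are straightforward: averaging N frames of independent noise reduces the residual noise level by roughly the square root of N, so going from 4 to 16 frames only halves the noise while quadrupling the electron dose. A minimal sketch with synthetic data (the signal level and noise sigma are hypothetical, not values from any real tool):

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = 100.0   # hypothetical mean detector signal per pixel
noise_sigma = 20.0    # assumed per-frame noise standard deviation
n_pixels = 10_000

residuals = {}
for n_frames in (1, 4, 16):
    # Each frame is the true signal plus independent Gaussian noise.
    frames = true_signal + noise_sigma * rng.standard_normal((n_frames, n_pixels))
    # Averaging frames shrinks the residual noise by ~sqrt(n_frames).
    residuals[n_frames] = frames.mean(axis=0).std()
    print(f"{n_frames:2d} frames -> residual noise ~{residuals[n_frames]:.1f} "
          f"(theory: {noise_sigma / np.sqrt(n_frames):.1f})")
```

The diminishing returns are why simply adding frames is a losing strategy at advanced nodes: each halving of noise costs 4x the dose and 4x the acquisition time.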

Fractilia develops software that aims to create an unbiased measurement of actual stochastic variation by measuring the noise itself rather than filtering it. It then uses that data to determine a flat noise floor. That noise is subtracted from the metrology results, allowing device manufacturers to reduce the number of frames of averaging for greater throughput and less sample damage, while still maintaining high enough accuracy and precision for decision-making in the fab.

“To do that, however, you have to have an edge detection system that is least sensitive to noise and that allows you to robustly find all the edges of your features, even for super noisy images,” Mack says.

Fig. 2: Determination of unbiased linewidth roughness. Source: Fractilia/imec


Changing the number of averaging frames changes the amount of noise in a CD-SEM image. By measuring and accounting for this noise in the results, the unbiased roughness is unaffected by the amount of noise in the image. This allows manufacturers to reduce the number of frames, which increases throughput and reduces sample damage, an important consideration for EUV and high-NA EUV with thinner resist layers and smaller features that are more susceptible to damage.
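The core idea, though not Fractilia's actual algorithm, can be sketched as follows: the measured edge-position variance is the true roughness plus a noise contribution, so if the flat noise floor is estimated from the high-frequency tail of the power spectral density (where correlated roughness has died out), it can be subtracted in quadrature. Everything below is synthetic and simplified (white noise, a boxcar-smoothed "true" edge):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096  # edge-position samples along the line (hypothetical)

# "True" edge roughness: correlated, made by smoothing white noise.
true_edge = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")
sigma_true = true_edge.std()

# Measured edge = true roughness + white SEM detection noise (assumed).
sigma_noise = 0.5
measured = true_edge + sigma_noise * rng.standard_normal(n)
sigma_biased = measured.std()  # naive roughness, inflated by noise

# Power spectral density, normalized so the bins sum to the variance.
x = measured - measured.mean()
psd = np.abs(np.fft.fft(x)) ** 2 / n**2

# White noise is flat, so estimate its floor from high frequencies,
# where the correlated roughness has decayed away.
floor = psd[n // 4 : n // 2].mean()
sigma2_noise_est = floor * n

# Subtract the noise in quadrature to recover the unbiased roughness.
sigma_unbiased = np.sqrt(max(sigma_biased**2 - sigma2_noise_est, 0.0))
print(f"true {sigma_true:.3f}  biased {sigma_biased:.3f}  unbiased {sigma_unbiased:.3f}")
```

Because the noise floor is measured rather than assumed, the unbiased estimate stays stable even when fewer averaging frames make the raw image noisier, which is the property that lets throughput increase.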

The resolution problem
Another big challenge for CD-SEM tools is that resolution is not increasing as fast as the devices are shrinking, so the signal is not getting better.

“CD-SEM resolution has improved by a factor of two during the time period where the feature sizes have been reduced by a factor of 20,” says Mack. “The uncertainty of CD-SEM measurements is increasing because we’re reaching a point where getting accurate images is seriously problematic. We just don’t have enough resolution.”

Uncertainty in CD-SEM measurements is governed by the spot size of the electron beam, which has remained roughly 1nm. It was 1nm when feature sizes were 20nm, and it is still 1nm with some critical features approaching 15nm. When device features were 20nm, a 1nm beam could measure a sample to within plus or minus 0.1nm. At N5, it is still plus or minus 0.1nm. As a percentage of the feature size, the uncertainty in CD-SEM measurements is therefore growing as features shrink.
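Making that percentage explicit shows how quickly a fixed absolute uncertainty eats into the error budget (the 10nm entry is an extrapolation for illustration, not a figure from the text):

```python
# Fixed absolute measurement uncertainty from a ~1nm beam spot (per the text).
uncertainty_nm = 0.1

relative_pct = {}
for feature_nm in (20, 15, 10):
    # Relative uncertainty grows as the feature shrinks, even though
    # the absolute +/-0.1nm figure stays the same.
    relative_pct[feature_nm] = 100 * uncertainty_nm / feature_nm
    print(f"{feature_nm}nm feature: +/-{uncertainty_nm}nm is "
          f"+/-{relative_pct[feature_nm]:.2f}% of CD")
```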

One way to compensate for this uncertainty is to increase the spot size of the electron beam and enlarge the field of view (FOV), which allows a single beam to collect more data for averaging without slowing throughput or increasing the risk of sample damage. That is exactly what suppliers have done.

Most CD-SEM companies offer tools with larger fields of view to meet these challenges. ASML's HMI eP5 system, for example, offers a 12,000 x 12,000 pixel FOV at 1nm resolution. This larger field of view dramatically increases the volume of data available, and it can make up for some of the lack of precision by making more measurements while maintaining throughput and beam intensity. With a larger volume of data, the uncertainty of any one data point can be offset by averaging many more data points together.

The computational curve
Companies also are turning to new computational techniques to get more information about CDs below 20nm out of the existing CD-SEM technology. Advanced pattern recognition algorithms using increasingly powerful GPUs are able to handle larger amounts of data than ever before, and are capable of predicting, identifying and classifying features with high precision and accuracy down to a few nanometers.

The challenge is how to perform these computations quickly enough to maintain throughput with the increasingly massive amounts of data collected at these advanced nodes. A 1,000 x 1,000 pixel image, for example, contains 1 million pixels, each with 256 grayscale levels (8 bits). During high-volume manufacturing, chipmakers may have dozens of CD-SEMs operating at full speed, so the amount of data collected every second is immense. Most of that data goes to waste for lack of the computational power to analyze it while maintaining throughput.
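A back-of-the-envelope calculation makes the scale concrete. The image size and frame count come from the text; the acquisition rate is an assumed round number for illustration:

```python
# Rough data rate for one CD-SEM (acquisition rate is hypothetical).
width = height = 1_000          # pixels per image, per the example in the text
bytes_per_pixel = 1             # 8-bit grayscale (256 levels)
image_bytes = width * height * bytes_per_pixel

frames_per_image = 16           # upper end of the averaging range in the text
images_per_second = 5           # assumed acquisition rate
raw_rate = image_bytes * frames_per_image * images_per_second

print(f"one averaged image: {image_bytes / 1e6:.1f} MB of final data")
print(f"raw frame stream: {raw_rate / 1e6:.0f} MB/s per tool")
```

Multiply that raw stream by dozens of tools per fab and the gap between data collected and data actually analyzed becomes obvious.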

AI/ML is being adopted by some fabs to help accumulate and process the massive amounts of data being produced by CD-SEMs, which can approach hundreds of gigabytes per beam per millimeter. Advanced denoising algorithms can help separate the signal from the noise, which is getting much harder at the smallest features. But not everyone thinks this is the best solution.

“I’m very skeptical,” says Mack. “There are lots of good places to use AI in semiconductor manufacturing. Metrology is the wrong place because ultimately we have to trace our metrology results to a ground truth, and the best way to do that is with physics. The goal is to maximize the use of all the information in an image, and treat every pixel as valuable for adding to our understanding of what’s really on the wafer. AI does the opposite. It analyzes all that information to determine what can be thrown away and what can be used. Making use of all that information is a computational curve issue that has to be grounded in a strong physics-based approach.”

Vacuum system improvements
Another key area of improvement for CD-SEM reliability is protection from contamination, and the vacuum chamber is the main point of vulnerability. Contaminants can cause artifacts that distort images and potentially damage sensitive components. If even a single atom adheres to the electron source tip, it can partially block the emission of electrons, resulting in unstable operation.

XEI Scientific, which develops technology to clean SEMs and other vacuum systems, warns that “a persistent problem in scanning electron microscopy is the deposition of hydrocarbon contamination induced by an electron beam.”

To minimize this risk, CD-SEM tools are typically equipped with advanced vacuum monitoring and control systems, as well as redundant vacuum pumps. Applied Materials' approach, for example, uses an extreme ultra-high vacuum with specially developed chamber materials that greatly reduce the presence of contaminants. Special pumps help achieve a vacuum below 1 × 10⁻¹¹ mbar, approaching the vacuum of outer space. The technology also includes a cyclical self-cleaning process that continuously removes contaminants from the beam source.

Thinner films and 3D imaging
Above N5, stochastic effects can be identified using top-down images in 2D. But high-NA EUV uses a higher numerical aperture, so photons strike the wafer at shallower angles and the depth of focus shrinks. This requires thinner photoresist layers and reduces the aspect ratio of resist features, making them very hard to measure. CD-SEMs are no more sensitive to the resist profile than they have ever been, and it becomes harder to ensure adequate contrast between the bottom of the trench and the top of the feature. Getting finer measurements requires more electrons, but that increases the risk of sample damage and slows throughput. Add in rising noise levels, and the challenges for CD-SEM at N3 become even greater.

Another impact of the smaller features at N3 and below is that the full shape of the circuit pattern begins to have a larger influence on device performance. So in addition to controlling pattern width, which CD-SEM has conventionally measured, it also becomes necessary to control the pattern's shape dimensions (height, sidewall angle, and width variations at the bottom, middle, and top of the feature) in much greater detail than ever before.

Tilt CD-SEMs have been used for years to obtain images of a sample from multiple angles by tilting either the sample or the electron gun at an angle to get a parallax view of the surface, which can be used to create a three-dimensional (3D) representation of the sample circuits. The advantage of tilt CD-SEMs is the ability to measure the height, sidewall angle, and other critical dimensions. However, this process significantly reduces throughput and increases the risk of sample damage. Tilt CD-SEMs also have a lower resolution due to the need for a larger working distance, which in turn reduces the beam current density and limits their usefulness for N5 features and below.

As feature sizes decrease, the signal-to-noise ratio decreases as well, making it more difficult to obtain high-resolution images with sufficient contrast and detail, especially for complex 3D structures. One way to capture some 3D information while maintaining the throughput and resolution of 2D imaging is to detect and analyze the electron diffusion that is already occurring.

Hitachi recently published a paper demonstrating how this can be accomplished by measuring and analyzing the diffused electrons that occur during CD-SEM irradiation.[1]

“It is difficult to reconstruct the cross-sectional shape analytically from SEM image signals,” Hitachi says. “However, SEM image signals contain various shape information, and variations in the image signals suggest variations in cross-sectional shape. In other words, capturing variations in the SEM image signal makes it possible to detect variations in cross-sectional shape.”

When the primary electron beam of a CD-SEM irradiates the top of a pattern, electron diffusion occurs (backscattering and secondary electron emission), and it depends on the 3D shape of the sample. This diffusion can be measured as signal contrast to estimate trends in the cross-sectional shape of the circuit pattern.

Fig. 3: Simulation of electron diffusion when the primary electron beam was irradiated onto the pattern top. Source: Hitachi


A reference pattern was developed for four wafers prepared under different etching conditions that produced different cross-sectional shapes. Cross-sectional SEM and CD-SEM images were taken near the center of each wafer.

Fig. 4: Cross sectional images and CD-SEM image of evaluation target wafers. Source: Hitachi

Hitachi determined that, in addition to conventional pattern width, the trend in middle-width variation can be detected by comparing the estimated variations of the electron emission signal against the reference pattern.
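The details of Hitachi's method are in the paper, but the core idea is a calibration: a signal-derived index from CD-SEM images is fit against reference cross-sections, and the fitted relationship then predicts shape variation from new images. A toy sketch of that pattern, with entirely synthetic data and an assumed linear relationship (the real mapping need not be linear):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration set: a signal-contrast index from CD-SEM images
# vs. the top-middle width gap measured by cross-sectional SEM (reference).
signal_index = rng.uniform(0.2, 0.8, 40)
true_gap_nm = 5.0 * signal_index + 1.0           # assumed underlying relationship
reference_gap = true_gap_nm + 0.2 * rng.standard_normal(40)  # reference noise

# Fit the calibration curve (a simple line in this sketch).
slope, intercept = np.polyfit(signal_index, reference_gap, 1)

# Predict the gap for a new, unseen CD-SEM measurement.
new_index = 0.5
predicted_gap = slope * new_index + intercept
print(f"predicted top-middle gap: {predicted_gap:.2f} nm")
```

The value of this kind of scheme is that the destructive cross-sectional SEM is only needed once, to build the calibration; afterward the non-destructive top-down images carry the 3D information.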

Fig. 5: Results of measurement of top-middle gap by learning the relationship between pattern width and 3D-shape indices. Source: Hitachi


These computational techniques will prove useful for cross-sectional analysis, but true 3D imaging with CD-SEMs remains a huge challenge, and it will become necessary at the smallest nodes and for new 3D chip architectures.

“Where the need for 3D really comes into play is in the memory technologies where things get super tall,” says Mack. “Imagine 3D NAND where you have these skyscrapers of memories piling up in there. We need to look at how things vary over a really long distance vertically. Being able to see the bottom of a hole drilled down through 128 layers of transistors is a giant challenge going forward.” 

Conclusion
CD-SEM isn’t going away. As with most semiconductor manufacturing tools and processes, the predicted limits for CD-SEM metrology are continually overcome, even with new challenges from ever smaller features at N5 and below. Whether it’s accounting for new challenges with noise, resolution, stochastics or computation, researchers and engineers keep finding new and creative ways to extend the usefulness of this vital metrology tool. Nevertheless, the problems are becoming more difficult and numerous, and the solutions more complex and time-consuming to develop and perfect.