Arthur Schaldenbrand Blog

The Art of Analog Design Part 4: Mismatch Analysis
Mon, 16 Oct 2017

In Part 3, we started to explore how to analyze the results of Monte Carlo analysis. In Part 4, we will consider the question: what is the relationship between process variation and the circuit's performance variation? The tool for exploring the relationship between process variation and circuit performance variation is mismatch analysis in Virtuoso® Variation Option (VVO). Let's start by looking at a simple example that shows the sources of offset voltage of a two-pole operational amplifier, see Figure 1.

Figure 1: Two-Pole Operational Amplifier

Looking at the design, we would expect that mismatch of the p-channel input transistors is the primary source of offset voltage. First, let's look at the Monte Carlo simulation results for the op-amp, see Figure 2.

Figure 2: Monte Carlo Analysis Results

The results show that the offset voltage is ~7.3mV. While Monte Carlo analysis tells us how much offset voltage there is, it does not tell us anything about the source of the offset voltage or how much improvement can be achieved. So, what are the sources of the offset voltage?
After Monte Carlo analysis, we can plot the relationship between the offset voltage and the threshold voltages of the input p-channel transistors, M17 and PM5, and the n-channel transistors in the first-stage load current mirror. The scatter plots in Figure 3 show no relationship between threshold voltage and the offset voltage of the operational amplifier: the correlation between offset voltage and the device threshold voltages is effectively 0.

Figure 3: Scatter Plots, Threshold Voltage versus Offset Voltage

Now let's try using contribution analysis, see Figure 4.

Figure 4: Mismatch Analysis Results

Mismatch analysis shows the relationship between the threshold voltage and the offset voltage. The reason the scatter plot showed no correlation is that it looks for linear correlation, while mismatch analysis reports that the dependency is second order (the label shows R^2). The results show that most of the variation, 99.997%, can be explained by the threshold variation of M17, PM5, NM4, and NM6. The results also show that ~70% of the offset voltage variation is due to the p-channel variation: the contribution from M17 is 34%, and the contribution from PM5 is 34%. The other source of offset voltage variation is the n-channel threshold voltage, which contributes 30%.

Let's use this information and see if we can improve the design. Since the p-channel transistors contribute most of the offset voltage, we will try an experiment: increase the p-channel transistor area by 16x, scaling both length and width by 4x to keep the W/L ratio constant. Since mismatch decreases as the square root of area, increasing the device size this way should decrease the effect of p-channel mismatch by a factor of four.

Figure 5: Monte Carlo Analysis with 16x P-Channel

The effect of scaling the p-channel transistors is to reduce the offset voltage of the op-amp from 7.2mV to 3.7mV. Doing some math, the p-channel offset contribution is ~6.4mV and the n-channel contribution is ~3.3mV.
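As a quick sanity check, the root-sum-square arithmetic can be sketched in Python. Independent mismatch contributions add in quadrature; the 6.4mV and 3.3mV values are the contributions reported by the analysis above.

```python
import math

def total_sigma(*contributions_mV):
    # Independent mismatch contributions add in quadrature (root-sum-square).
    return math.sqrt(sum(c**2 for c in contributions_mV))

p_channel = 6.4   # mV, p-channel contribution from mismatch analysis
n_channel = 3.3   # mV, n-channel contribution

before = total_sigma(p_channel, n_channel)
# Scaling p-channel area by 16x (4x W, 4x L) divides its sigma by sqrt(16) = 4.
after = total_sigma(p_channel / 4, n_channel)

print(f"before sizing: {before:.1f} mV")  # ~7.2 mV
print(f"after sizing:  {after:.1f} mV")   # ~3.7 mV
```

This reproduces both Monte Carlo results, confirming that the two contributions account for essentially all of the observed offset.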
Verifying the offset voltage: the initial offset voltage is √(6.4² + 3.3²) = 7.2mV. After device sizing, the offset voltage is √((6.4/4)² + 3.3²) = 3.7mV. This example shows how mismatch analysis can be used to understand the effect of process variation on circuit performance. While we understand qualitatively that the input transistors are the primary contributor to offset voltage, mismatch analysis provides us a tool for quantitative analysis of variation. In the next blog, we will apply mismatch analysis to additional circuits.

https://community.cadence.com/cadence_blogs_8/b/cic/archive/2017/10/15/the-art-of-analog-design-part-4-mismatch-analysis

The Art of Analog Design Part 5: Mismatch Analysis II
Fri, 13 Oct 2017

In Part 4 of the series, we looked at applying mismatch analysis as a design tool. In Part 5, we will continue to look at mismatch analysis by applying the technology to other types of designs. The first case we will look at is a circuit without a DC operating point. A dynamic comparator, see Figure 1, doesn't have a quiescent operating point, which makes it difficult to analyze. In this case, the offset voltage is measured using transient analysis: a positive and a negative staircase is applied at the input, the input value at which the output switches is recorded for each, and the average of the two input levels is the offset voltage. To increase the resolution of the offset voltage measurement, the step size needs to be small. In this case, the step size of the staircase ramp is 100mV.
A Verilog-A module was used as the signal source to generate the staircase, see Figure 2. For more details about measuring dynamic comparator offset voltage, please see the ADC Verification Workshop Rapid Adoption Kit on Cadence Online Support. Looking at the comparator, we would expect that the mismatch of the input transistors is the primary source of offset voltage. After the Monte Carlo analysis, we will use scatter plots showing the random variables causing mismatch for three transistors: NM2, NM3, and NM4, see Figure 3a. For the devices in the differential pair, NM2 and NM3, we can see that there is correlation between the offset voltage and the input transistors; the correlation coefficient r is about 0.5. For the current source transistor, NM4, there is no correlation between the offset voltage and the transistor's variation; the correlation coefficient r is about 0. So, the scatter plots are consistent with our expectations about how the devices are impacted by the statistical variation. Again, we can see the utility and the limitations of the scatter plot. Qualitatively, the scatter plot allows us to visualize the relationship between the inputs (statistical variables) and the outputs (measured values). However, it is difficult to extract quantitative information from the results. So, while we can use scatter plots to confirm what we already know, they don't really provide any additional information to designers. We will use mismatch analysis to analyze the effect of variation on offset voltage. The mismatch analysis results are shown in Figure 4. Again, we see that offset voltage has a non-linear, second-order relationship with the statistical variables. We can also see that most of the variation, 99.935%, is accounted for by the mismatch results. We can see that ~90% of the offset voltage variation is due to the input transistor variation.
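The earlier point that scatter plots only capture linear dependence can be demonstrated numerically. This sketch uses synthetic data (not the simulation results): a purely second-order dependence produces a near-zero Pearson correlation coefficient even though the dependence is real.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                    # stand-in statistical variable
y_linear = 0.5 * x + rng.standard_normal(10_000)   # linear dependence plus noise
y_quad = x**2                                      # purely second-order dependence

r_linear = np.corrcoef(x, y_linear)[0, 1]
r_quad = np.corrcoef(x, y_quad)[0, 1]

print(f"linear case:    r = {r_linear:.2f}")   # roughly 0.45
print(f"quadratic case: r = {r_quad:.2f}")     # near 0
```

This is why the scatter plot in the op-amp example showed no correlation, while mismatch analysis, which fits a second-order model, found the dependence.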
Mismatch analysis considers the variation at the statistical variable level: NM2.rn2 contributes 30%, NM3.rn2 contributes 29%, NM2.rn1 contributes 17%, and NM3.rn1 contributes 16%. While our naming convention could be more explicit, you can think about the variables as the individual contributions to variation: gate oxide thickness variation and gate length variation. Another observation is that there is another source of offset voltage variation, the cascode transistors, NM0 and NM1. While not significant, it is useful to know that mismatch analysis has enough resolution to identify small contributors. Mismatch analysis provides designers a tool to analyze the effect of mismatch qualitatively and quantitatively. To summarize, mismatch analysis is a useful tool for analyzing the results of Monte Carlo analysis. In this case, we analyzed the effect of variation on a dynamic comparator. Traditionally, it is difficult to analyze a dynamic comparator because it is not a linear circuit with a DC operating point. Perhaps more than anything else, the ability to analyze circuits that designers have not been able to analyze in the past is the true value of mismatch analysis.

https://community.cadence.com/cadence_blogs_8/b/cic/archive/2017/10/13/the-art-of-analog-design-part-5-mismatch-analysis-ii

The Art of Analog Design: Part 3, Monte Carlo Sampling
Sat, 23 Sep 2017

In Part 2, we looked at Monte Carlo sampling methods. In Part 3, we will consider what happens once Monte Carlo analysis is complete.
Of course, we will need to analyze the results, so let's look at some of the tools for visualizing what the Monte Carlo analysis is trying to show us about the circuit. First, let's review the results from the previous blog. The circuit being simulated is a capacitor D/A converter, or CAPDAC. The CAPDAC is used in a successive approximation ADC to generate the reference levels for comparison. The mismatch of the unit capacitors in the CAPDAC contributes to degradation of the CAPDAC SINAD (Signal-to-Noise and Distortion ratio) and is an important contributor in determining the overall SINAD of the ADC. This CAPDAC is used in a 10-bit ADC. Based on the error budget for the ADC, if the CAPDAC has a SINAD of 60dB or better, we will be able to meet our ADC SINAD target. The CAPDAC SINAD was simulated using Monte Carlo with auto-stop, a yield target of 60dB for SINAD, a yield of 3σ or greater, a confidence level of 90%, and the Low Discrepancy Sampling (LDS) method. The simulation required 1755 samples to meet the 90% confidence requirement. In the last blog post, the effect of process variation on the SINAD distribution was plotted, see Figure 1. To help show how the CAPDAC performance compares to the specification, the pass/fail limits have been overlaid on top of the distribution: green is pass and red is fail.

Figure 1: CAPDAC SINAD distribution

The plot also has bars showing the mean value and the standard deviation markers from -3σ to +3σ, allowing us to visualize how much margin the CAPDAC has relative to the specification. For the CAPDAC, there is almost 2σ of margin between the specification and the -3σ limit of the distribution. One observation from looking at the distribution is that it appears to have a long tail. In statistics, a distribution with a long tail has a large number of occurrences far from the central part of the distribution.
Looking at the distribution, we can see that on the positive side there is only one point more than +2σ from the mean, while on the negative side there are many data points more than 3σ below the mean. Next, let's apply another tool, quantile-quantile plotting. The purpose is to test whether our simulated distribution is a Normal (or Gaussian) distribution. A quantile-quantile plot is a technique to evaluate whether two distributions are the same by plotting their quantiles against each other, where the quantiles are points taken at regular intervals from the cumulative distribution function (CDF) of a random variable. The 0.5-quantile of a distribution is the median: the value where half the samples in the distribution are higher in value than the median and half are lower. Since the distribution is skewed, the mean value will not be equal to the median value.

Figure 2: Quantile-quantile plot for CAPDAC SINAD

If the simulated distribution forms a straight line when plotted against the reference distribution, the Normal distribution, then the distributions match and the simulated distribution is Gaussian. As expected, the simulated distribution is not a straight line when plotted against the Normal distribution (see Figure 2). The distribution is only Normal in the region from -1σ to +1σ. Another way to look at the effect of the long tail is to consider how the CAPDAC yield compares to the expected yield of a Normal distribution. For the CAPDAC, there is 1 failure in 1755 samples. The worst-case value of CAPDAC SINAD is 59.85dB, which is 5.2σ below the mean value. For a Normal distribution, the expected failure probability at 5σ below the mean is 1 failure per 3.5 million samples.
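The quantile-quantile construction and the 5σ failure-rate figure can both be sketched with SciPy. The sample data here is synthetic (a Normal core with an added low-side tail standing in for the SINAD results); the mean and sample count mirror the example above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for the SINAD samples: Normal core plus a low-side tail.
samples = np.concatenate([rng.normal(61.15, 0.25, 1700),
                          61.15 - rng.exponential(0.8, 55)])

# Quantile-quantile points: sorted data versus matching Normal quantiles.
n = len(samples)
probs = (np.arange(1, n + 1) - 0.5) / n
theoretical = stats.norm.ppf(probs)   # reference Normal quantiles
observed = np.sort(samples)           # empirical quantiles
# Plotting (theoretical, observed) gives a Q-Q plot like Figure 2;
# a straight line would mean the data is Gaussian.

# Expected failure rate for a Normal distribution at 5 sigma below the mean:
p_fail = stats.norm.sf(5)
print(f"1 failure per {1 / p_fail / 1e6:.1f} million samples")  # ~3.5 million
```

The long synthetic tail bends the low end of the Q-Q curve away from the straight line, just as the real distribution does in Figure 2.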
The effect of the long tail, the non-Normal nature of the distribution, is a significant reduction in yield compared to the yield of a Normal distribution. Quantile-quantile plots thus provide a powerful tool for visualizing whether the simulated distribution is Normal or not. Next, let's look at another measurement that is useful for designers: the process capability index, or Cpk. Cpk is a statistical measure of process capability, which is the ability of a process to produce output within specification limits. For the CAPDAC, the Cpk is one of the outputs in the Virtuoso ADE Assembler results window (see Figure 3); it can only be output if a specification has been defined. Cpk is calculated here as the ratio of the distance from the mean value to the specification limit, measured in standard deviations, to the target yield, also expressed in standard deviations. For the CAPDAC, the numerator is 4.6σ, the distance from the mean value of 61.15dB to the 60dB specification (see "sigma to target"). The target yield was 3σ, so the denominator is 3σ. A less precise way to think about Cpk is as a measure of design margin: it tells us how much margin we have between the actual limit of the process and the user's expectation for the process. To summarize, we have looked at two tools for visualizing the results of Monte Carlo analysis and using them to identify problems. Plotting distributions allows us to understand how well centered a design is. Quantile-quantile plots allow us to look at the distribution and identify whether it has a long tail, since a long tail can translate into poor yield. And by using Cpk, we can quantify how much design margin we have.
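The Cpk calculation just described is simple enough to sketch directly; the numbers below are the CAPDAC values from the text (mean 61.15dB, lower spec limit 60dB, 4.6σ to target, 3σ yield target).

```python
# Cpk as described above: distance from the mean to the spec limit (in sigma)
# over the target yield (in sigma). Numbers are from the CAPDAC example.
mean_sinad = 61.15      # dB
spec_limit = 60.0       # dB, lower specification limit
sigma = (mean_sinad - spec_limit) / 4.6   # implied standard deviation, ~0.25 dB
target_sigma = 3.0      # 3-sigma (99.73%) yield target

sigma_to_target = (mean_sinad - spec_limit) / sigma   # 4.6 by construction
cpk = sigma_to_target / target_sigma
print(f"Cpk = {cpk:.2f}")   # ~1.53; Cpk > 1 indicates margin beyond the target
```

A Cpk above 1 means the spec limit sits farther from the mean than the target yield requires, i.e., the design has margin.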
In the next blog post, we will start to look at what we can do to identify and correct issues.
https://community.cadence.com/cadence_blogs_8/b/cic/archive/2017/09/22/the-art-of-analog-design-part-3-monte-carlo-sampling

The Art of Analog Design Part 2: Monte Carlo Sampling
Sat, 12 Aug 2017

Historically, one of the great challenges that analog and mixed-signal designers face has been accounting for the effect of process variation on their designs. Minimizing the effect of process variation is an important consideration because it directly impacts the cost of a design. From Pelgrom's Law (1), it is understood that device mismatch due to process variation decreases as the square root of device area increases, see Note 1. For example, to reduce the standard deviation, sigma, of the offset voltage from 6mV to 3mV, the transistors need to be four times larger. By increasing transistor size, the die cost is also increased, since die cost is proportional to die (and transistor) area. In addition to increasing cost, increasing device area may degrade performance due to the increased parasitic capacitances and resistances of larger devices.
Or the power dissipation may need to increase to maintain performance, due to the larger parasitic capacitances of the larger devices. In order to optimize a product for an application, that is, for it to meet the target cost with sufficient performance, analog and mixed-signal designers need tools to help them analyze the effect of process variation on their design. Another way to look at the issue is to remember that analog circuits haven't scaled down as quickly as digital circuits; maintaining the same level of performance has historically required roughly the same die area from process generation to process generation. So, while the density of digital circuitry doubles every eighteen months, analog circuits don't scale at the same rate. If an ADC requires 20% of the die area at 180nm, then after two process generations, at the 90nm process node, the die areas of the ADC and the digital blocks are equivalent. After two more process generations, at 45nm, the ADC requires 4x the area of the digital blocks, see Note 2. This example is exaggerated; however, the basic concept that process variation is an important design consideration for analog design is valid. Traditionally, the main focus of block-level design has been on parasitic closure, that is, verifying the circuit meets specification after layout is complete and parasitic devices from the layout have been accounted for in simulation. This focus on parasitic closure meant that there was only limited support for analyzing the effect of process variation on a design. During the design phase, sensitivity analysis allowed a designer to quantitatively analyze the effect of process parameters on performance. During verification, designers have used corner analysis or Monte Carlo analysis to verify performance across the expected device variation, environmental, and operating conditions.
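The area-versus-mismatch arithmetic from Pelgrom's Law above can be sketched as follows. The matching coefficient A_vt here is a hypothetical illustrative value, not a figure for any specific process.

```python
import math

# Pelgrom's law: sigma(delta_Vt) = A_vt / sqrt(W * L).
# A_vt below is a hypothetical illustrative value (mV*um), not from a real PDK.
def sigma_vt_mV(W_um, L_um, A_vt=6.0):
    return A_vt / math.sqrt(W_um * L_um)

base = sigma_vt_mV(W_um=1.0, L_um=1.0)   # 6 mV at unit area
quad = sigma_vt_mV(W_um=2.0, L_um=2.0)   # 4x area -> sigma halved -> 3 mV
print(base, quad)
```

Halving sigma costs 4x the area (and roughly 4x the device capacitance), which is exactly the cost/performance trade-off described above.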
In the past, these analysis tools were sufficient because an experienced designer already understood their circuit architecture, its capabilities, and its limitations, so performance specifications could be achieved by overdesigning the circuit. However, ever-decreasing feature sizes have increased the effect of process variation, and market requirements mean designers have less margin available for guard-banding their designs. The decreasing feature sizes also mean that power supply voltages are scaling down, and in some cases circuit architectures need to change. An example of how power supply voltage affects circuit architecture is ADC design, where there has been a movement from pipeline ADC designs at legacy nodes (180nm) to successive approximation ADCs (SAR ADCs) for advanced-node (45nm) designs. This change has occurred because a SAR ADC can operate at lower power supply voltages than a pipeline ADC. As a result of the changing requirements placed on designers, there is a need for better support for design analysis than ever before. Let's look at an example of statistical analysis often performed by analog designers. Shown below is the Signal-to-Noise and Distortion Ratio (SNDR, or SINAD) of a capacitor D/A converter, CAPDAC. A CAPDAC is used in a successive approximation ADC to generate the reference voltage levels that the input voltage is compared against in order to determine the digital output code. The SINAD of the CAPDAC determines the overall ADC accuracy.

Figure 1: Example of Monte Carlo Analysis Results for Capacitor D/A Converter Signal-to-Noise Ratio

On the left is the distribution of the capacitance variation, and on the right is the CAPDAC Signal-to-Noise Ratio (SNR) distribution. From the SNR distribution, the mean and standard deviation of the CAPDAC SNR can be calculated. If the specification is that the SNR must be greater than 60dB, does this result mean that the yield will be 100%?
Another question to consider is whether or not the SNR distribution is Gaussian, since the analysis of the results is impacted by the type of distribution. Or we might want to quantify the process capability, Cpk. Cpk is a parameter used in statistical quality control to understand how much margin the design has. In the past, this type of detailed statistical analysis has not been available in the design environment; designers needed to export the data and perform the analysis with tools such as Microsoft Excel. Beginning in IC6.1.7, Cadence® Virtuoso® ADE Explorer was released with features to support a designer's need for statistical analysis. Just a note: for detailed technical information, you can explore the Cadence Online Support website or contact your Virtuoso front-end AE. Now let's take a quick look at enhancements to Monte Carlo analysis, starting with the methods used to generate the samples for the analysis. In Monte Carlo analysis, the values of statistical variables are perturbed based on the distributions defined in the transistor model. The method of selecting the sample points determines how quickly the results converge statistically. Let's start with a quick review: in the CAPDAC example, we ran 200 simulations and all of them passed. Does that mean that the yield is 100%? The answer is no; it means that for the sample set used for the Monte Carlo analysis, the yield is 100%. In order to know what the manufacturing yield will be, we need to define a target yield, for example, a yield greater than 3 standard deviations (99.73%), and define a level of confidence in the result, for example 95%. Then we can use a statistical tool called the Clopper-Pearson method (2) to determine whether the Monte Carlo results have a >95% chance of having a yield of 99.73%.
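The Clopper-Pearson check just described can be sketched with SciPy's Beta distribution; this is a simplified sketch of the statistical method, not of any particular tool's internals. It shows why 200 all-passing samples are not enough to claim 3σ yield.

```python
from scipy.stats import beta

def clopper_pearson(passes, n, confidence=0.95):
    """Two-sided Clopper-Pearson confidence interval for the true yield."""
    alpha = 1.0 - confidence
    lo = beta.ppf(alpha / 2, passes, n - passes + 1) if passes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, passes + 1, n - passes) if passes < n else 1.0
    return lo, hi

# 200 Monte Carlo points, all passing: what yield can we actually claim?
lo, hi = clopper_pearson(200, 200)
print(f"95% confidence interval: [{lo:.4f}, {hi:.4f}]")
# lo is only ~0.982, well short of the 99.73% (3-sigma) target, so
# 200 all-passing samples cannot demonstrate 3-sigma yield.
```

Even a perfect 200/200 result only demonstrates roughly 98.2% yield at 95% confidence, which is why more iterations, and better sampling, are needed.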
The Clopper-Pearson method produces a confidence interval, the minimum and maximum possible yield, given the current yield, the number of Monte Carlo iterations, and so on. Often, designers perform a fixed number of simulations, 50, 100, etc., based on experience, and assume that the results predict the actual yield in production. By checking the confidence interval, we can reduce the risk of missing a yield issue. Another consequence of this rigorous approach to statistical analysis is that more iterations of Monte Carlo analysis are required. As a result, designers need better sampling methods that reduce the number of samples (Monte Carlo simulation iterations) required in order to trust the results. Random sampling is the reference method for Monte Carlo sampling since it replicates the actual physical processes that cause variation; however, random sampling is also inefficient, requiring many iterations (simulations) to converge. New sampling methods have been developed to improve the efficiency of Monte Carlo analysis by selecting sample points more uniformly. Shown in Figure 2 is a comparison of samples selected for two random variables, for example, n-channel mobility and gate oxide thickness. The plots show the samples generated by random sampling and by a new sampling algorithm called Low Discrepancy Sampling, or LDS. Looking at the sample points, it is clear that LDS has more uniformly spaced sample points. More uniformly spaced sample points mean that the sample space has been more thoroughly explored, and as a result the statistical results converge more quickly. This translates into fewer samples being required to correctly estimate the statistical results: yield, mean value, and standard deviation.

Figure 2: Comparison of Random Variable Values Using Random Sampling and LDS Sampling

The LDS sampling method replaces Latin Hypercube sampling because it is as efficient and also supports Monte Carlo auto-stop.
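The uniformity difference between random and low-discrepancy sampling can be shown with SciPy's quasi-Monte Carlo module. Here a Sobol sequence stands in for a low-discrepancy generator; this is an illustration of the general idea, not a claim that Virtuoso's LDS algorithm is Sobol-based.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(2)
n = 256  # sample points in 2 dimensions (e.g., two statistical variables)

random_pts = rng.random((n, 2))                # plain random sampling
sobol_pts = qmc.Sobol(d=2, seed=2).random(n)   # a low-discrepancy sequence

# Discrepancy measures how far a point set is from perfectly uniform
# coverage of the unit square; lower is better.
d_random = qmc.discrepancy(random_pts)
d_sobol = qmc.discrepancy(sobol_pts)
print(f"random: {d_random:.5f}  low-discrepancy: {d_sobol:.5f}")
```

The low-discrepancy set covers the space far more evenly, which is the plotted effect in Figure 2 and the reason the statistics converge with fewer samples.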
Monte Carlo auto-stop is an enhancement to Monte Carlo that optimizes simulation time. Statistical testing is used to determine whether the design meets some test criterion; for example, for the CAPDAC, assume that you want to know with a 90% level of confidence that the SNR yield is greater than 99.73%. The user defines these criteria at the start of the Monte Carlo analysis, and the results are checked after every iteration. The analysis stops if one of two conditions occurs. First, the analysis will stop if the minimum yield from the Clopper-Pearson method is greater than the target criterion, that is, if the SNR yield is demonstrably greater than 99.73%. The Monte Carlo analysis will also stop if Virtuoso ADE Explorer finds that the maximum yield from the Clopper-Pearson method cannot exceed 99.73%. Since failing this test means that the design has an issue that needs to be fixed, this result is just as important. It also turns out that failure usually becomes evident quickly, after a few iterations of the simulation. As a result, using statistical targets to automatically stop Monte Carlo can significantly reduce the simulation time. In practice, what does this look like? Consider the plot in Figure 3, which shows the upper bound (maximum yield), the lower bound (minimum yield), and the estimated yield of the CAPDAC as a function of the iteration number. The green line is the lower bound of the confidence interval on the estimated yield. By the 300th iteration, we know that the yield is greater than 99% with a confidence level of 90%; in other words, we can be very confident that the CAPDAC yield will be high. In addition, thanks to Monte Carlo auto-stop, we only needed to run the analysis once.

Figure 3: Yield Analysis Plot

To summarize, the two improvements to Monte Carlo sampling are LDS sampling and Monte Carlo auto-stop.
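The two stopping conditions can be sketched as a loop over a pass/fail stream; this is a simplified sketch of the logic described above, not the actual ADE Explorer implementation.

```python
from scipy.stats import beta

def auto_stop(results, target=0.9973, confidence=0.90):
    """Return (iteration, verdict) once the Clopper-Pearson bounds decide."""
    alpha = 1.0 - confidence
    passes = 0
    for n, ok in enumerate(results, start=1):
        passes += ok
        lo = beta.ppf(alpha / 2, passes, n - passes + 1) if passes > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, passes + 1, n - passes) if passes < n else 1.0
        if lo > target:
            return n, "pass"   # yield demonstrably above target
        if hi < target:
            return n, "fail"   # yield cannot reach target
    return len(results), "undecided"

# A design that always fails is caught almost immediately:
print(auto_stop([False] * 100))
# An always-passing design needs on the order of 1100 iterations to
# prove 3-sigma yield at 90% confidence:
print(auto_stop([True] * 2000))
```

This illustrates the asymmetry noted above: a failing design is flagged within a few iterations, while proving a passing design takes many more.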
LDS sampling uses a new algorithm to more effectively select the sampling points for Monte Carlo analysis. Monte Carlo auto-stop uses the statistical targets, yield and confidence level, to determine when to stop the Monte Carlo analysis. As a result of these two new technologies, the amount of time required for Monte Carlo analysis can be significantly reduced. In the next article, we will look into analyzing Monte Carlo analysis results to better understand our designs and how to improve them.

Note 1: Remember that in analog design, designers rely on good matching to achieve high accuracy. Designers can start with a resistor whose absolute accuracy may vary +/-10% and take advantage of the good relative accuracy, the matching between adjacent resistors, to achieve highly accurate analog designs. For example, the matching between adjacent resistors may be as good as 0.1%, allowing designers to build data converters of 10-bit (~1000 parts per million, ppm), 12-bit (~250ppm), or even 14-bit (~61ppm) accuracy.

Note 2: In reality, only the components in the design sensitive to process variation do not scale, so the area of the digital blocks will scale, and the area of some of the analog blocks may scale. The solution designers typically adopt to maintain scaling is to implement new techniques, such as digitally assisted analog (DAA) design, to compensate for process variations. While adopting DAA may enable better scaling of the design, it also increases schedule risk and verification complexity.

References:
1) M.J.M. Pelgrom, A.C.J. Duinmaijer, and A.P.G. Welbers, "Matching properties of MOS transistors," IEEE Journal of Solid-State Circuits, vol. 24, pp. 1433-1439, October 1989.
2) See Clopper-Pearson interval, http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval

The Art of Analog Design Part 1: Overview of Variation-Aware and Robust Design (Thu, 10 Aug 2017)
https://community.cadence.com/cadence_blogs_8/b/cic/archive/2017/08/09/the-art-of-analog-design-part-1-overview-of-variation-aware-and-robust-design

In this series, we will focus on advanced concepts for custom IC design, in particular, variation-aware design (VAD). With the emergence of high-speed simulators such as Spectre ® APS, designers can now run simulations faster than ever before, so they are able to more completely verify their designs before taping out. However, assuring a successful design requires more than verifying proper functionality for different stimuli and performance across corner conditions; it requires properly allocating design margins based on process variation. Designers can use the Cadence ® Virtuoso ® ADE Product Suite not only to analyze the results and verify the design is specification compliant, reducing the risk of a design respin and getting the product to market faster, but also to increase competitiveness by reducing the effect of process variation on a design. Solving this problem requires more than fast simulation; it requires adopting new tools and methodologies. First, let’s consider the impact of over-margining to avoid the negative effects of process variation on circuit performance. 
For example, let’s say we are designing a successive approximation ADC and find that the linearity of the capacitor digital-to-analog converter (CAPDAC) used to generate reference values limits yield to 90%. Also assume that for the current design, the CAPDAC is 25% of the die area and there are 1000 die/wafer. If we can increase the yield to 99% by doubling the CAPDAC area, should we do it? Working through the numbers, we see that the current design has 900 good die per wafer, while the high-yield design has only 792 good die per wafer: doubling the CAPDAC grows the die by 25%, leaving 800 die/wafer, and 800 die/wafer * 99% yield = 792. So even though the yield went up, profit will go down. There are two points to consider: Overdesign, that is, designing with margin, is not free; allowing too much design margin can hurt competitiveness. The second point is subtler. To borrow from Mark Twain, “There are three kinds of lies: lies, damn lies, and statistics”; that is, we are relying heavily on statistical analysis to make critical decisions. What type of simulation was performed to generate the yield numbers? Should the results of these simulations be trusted? These are questions that we also need to consider when deciding which design to take to production. In the first part of this series of articles, we will explore variation-aware design. The question to be considered is how to balance immunity to process variation against the cost in terms of product competitiveness. In the second half of these articles, we will explore reliability analysis for devices and interconnect. Again, this is an area where designers have traditionally relied on allocating design margin and overdesign to prevent issues. The question to be considered is, as the importance of designing for automotive applications and industrial and infrastructure applications grows, do we have enough design margin? 
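Returning to the CAPDAC example above, the good-die arithmetic is easy to check. This is a toy calculation using only the illustrative figures from the text, not real product data:

```python
def good_die_per_wafer(die_per_wafer, yield_fraction):
    """Good die per wafer is simply gross die times yield."""
    return die_per_wafer * yield_fraction

# Current design: 1000 die/wafer at 90% yield.
baseline = good_die_per_wafer(1000, 0.90)           # ~900 good die

# Doubling a CAPDAC that is 25% of the die grows the die by 25%,
# so gross die drops to 1000 / 1.25 = 800 per wafer, at 99% yield.
high_yield = good_die_per_wafer(1000 / 1.25, 0.99)  # ~792 good die

print(baseline, high_yield)
```

The higher-yield design ships fewer good die per wafer, which is the point of the example: margin has a cost.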
Automotive designs operate in harsher environments and may need to operate reliably for years after a consumer product would have been recycled. The need for these types of solutions has been anticipated, and these capabilities already exist in the design environment. In the next article, we will look into Monte Carlo sampling methods to see how we can minimize the number of simulations required to answer the question of what the yield is for the circuit.

Measuring Bipolar Transistor ft with Fixed Base-Collector Voltage (Tue, 12 Jun 2012)
https://community.cadence.com/cadence_blogs_8/b/rf/archive/2012/06/12/measuring-bipolar-transistor-ft-with-fixed-base-collector-voltage

Recently I had a question from a reader. He asked a good question: "How do you measure a bipolar transistor's ft when the base-collector voltage, Vbc, is fixed?" Attached is a modified version of the testbench that allows a user to measure ft with a fixed Vbc. While the aesthetics are not as pleasing as the original testbench, it does the job. The testbench is shown in Figure 1. The base of the bipolar transistor, the DUT, is grounded. The collector of the transistor is connected to a dc source, VBC, which is used to set the base-collector voltage of the transistor. The emitter is connected to a current source that sets the bias current, IE. An additional supply, VBE, is included to assure the base-emitter junction is always forward biased. For these tests, the dummy power supply voltage, VBE, is set to 5V. 
Figure 1: ft Testbench Modified for Fixed Vbc

To measure the ft, use the same methodology previously described:

1. Run a dc operating point analysis and save the collector current.
2. Run an ac analysis, sweeping the frequency beyond the maximum value of ft.
   a. In this case, the ac sweep was from 1Hz to 10GHz.
   b. Save the base and collector currents.
3. Use the Virtuoso ViVA waveform calculator to measure the ac beta of the transistor.
   a. The ac beta is ic/ib, where ic and ib are the ac currents.
4. Use the Virtuoso ViVA waveform calculator cross() function to measure the ft.
   a. Measure the frequency where the value of the ac beta is 1, or 0dB.
5. In Virtuoso Analog Design Environment (ADE), set up a parametric plot to sweep the emitter current.
   a. In this case, the emitter current was swept from 100nA to 10mA.
6. Run Parametric Analysis.
7. Plot the collector current and the ft when the analysis completes.
8. Use the Y vs Y option to plot the ft vs the collector current.

Shown below is an example of the ft curves for the NPNupper transistor model used in the rfLib. The ft was measured for current sweeps using different values of Vbc: 0.5V, 1.0V, and 1.5V. As you can see, increasing the base-collector voltage delays the onset of saturation and allows the transistor to achieve higher ft. Figure 2: ft vs Ic for a fixed Vbc Please let me know if this post was useful, if you have any questions, or comments. 
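If you export the sweep data, the 0dB-crossing step can be reproduced outside ViVA. The sketch below is a hedged illustration: it uses a hypothetical single-pole beta roll-off in place of simulated ib and ic data, and interpolates the |ic/ib| = 1 crossing on log-log axes, analogous to the cross() measurement in step 4:

```python
import math

def ft_from_beta(freqs, beta_mags):
    """Find the frequency where |ac beta| crosses 1 (0 dB), interpolating
    between adjacent sweep points on log-log axes."""
    pts = list(zip(freqs, beta_mags))
    for (f1, b1), (f2, b2) in zip(pts, pts[1:]):
        if b1 >= 1.0 > b2:
            # linear interpolation in log(f) vs log(|beta|)
            t = math.log(b1) / (math.log(b1) - math.log(b2))
            return math.exp(math.log(f1) + t * (math.log(f2) - math.log(f1)))
    return None  # no crossing within the swept range

# Hypothetical single-pole device: beta0 = 100 with a pole at 10 MHz,
# so |beta| falls to 1 near beta0 * fp = 1 GHz.
beta0, fp = 100.0, 10e6
freqs = [10 ** (k / 100) for k in range(600, 1001)]      # 1 MHz .. 10 GHz
mags = [beta0 / math.hypot(1.0, f / fp) for f in freqs]  # single-pole roll-off
print(ft_from_beta(freqs, mags))
```

With 100 points/decade, as recommended for the s-parameter sweep later in this feed, the interpolated crossing is within a fraction of a percent of the analytic value fp*sqrt(beta0^2 - 1).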
Art Schaldenbrand

Comment on Periodic Steady-State Analysis for DC-to-DC Converters (Mon, 11 Jun 2012)
https://community.cadence.com/cadence_blogs_8/b/rf/archive/2009/06/30/periodic-steady-state-analysis-for-dc-to-dc-converters

Amir, information about your circuit, your testbench, and the simulator options that you have tried is always useful to provide when asking for assistance with debugging. Here are some things that I look at and have found useful in the past.

1) Plot the pulse width versus time to see if the circuit really is in steady state.
   a. My experience is that dc-to-dc converters need to be closer to periodic steady state than RF circuits such as an LNA.
   b. You should also check that the pulse width settles to steady state and is not oscillating, see #4.
   c. Use the IC61 ViVA pulse width function to plot the pulse width vs. time.
2) Replace Verilog-A standard cells with transistor-level equivalents. In general, Verilog-A logic gates have non-physical behavior, and transistor-level gates seem to converge better.
3) How ideal is the circuit? Often early in the design process, designs are “idealized” and simplified. For a dc-to-dc converter, simplification can cause issues.
   a. For example, not including circuitry to suppress shoot-through, combined with ideal switches, can cause extremely large currents in the output stage. This behavior is non-physical and can cause convergence issues.
   b. Does the testbench include the EMI filter? 
A dc-to-dc converter may oscillate when supplied by an ideal power supply; using an EMI filter will suppress the oscillations.
4) Even though the circuit seems to work in transient, it may have issues that make it difficult to achieve periodic steady state. For example, under certain conditions, dc-to-dc converters may oscillate, and this can cause the PSS analysis to fail.
5) If you are using the moderate error preset, try setting the method to gear2only.
6) You did not describe what you are doing with the tolerances. My experience is that most designers tighten the tolerances, and for transient analysis that often helps. It is also useful when performing RF analyses, since IP3 measurements require high resolution in order to distinguish distortion tones from the numerical noise floor. However, for dc-to-dc converters, setting the tolerances too tight often over-constrains the simulator and can cause convergence issues. For PSS analysis, there is an additional tolerance parameter that affects convergence of the periodic steady state: steadyratio.
   a. steadyratio is the tolerance parameter used to determine whether the circuit has reached steady state or not.
   b. In general, if you tighten the tolerances, then you need to loosen the steadyratio, or you risk over-constraining the PSS analysis.
   c. In general, for a well-behaved circuit, the default value for the steadyratio should produce good results.
   d. You might want to try using the default (moderate) or relaxed (liberal) error preset and relaxing the steadyratio, for example, steadyratio=1.
   e. If the circuit converges, then you can take advantage of Spectre save/restart to gradually tighten the tolerances:
      i. Set steadyratio=1, save the results.
      ii. Set steadyratio=0.1, restart with the result from steadyratio=1 and save the results.
      iii. 
Continue until you get to convergence.
7) Finally, you could always talk to your support team, and they can escalate issues to R&D for you.

Measuring Fmax for MOS Transistors (Thu, 11 Aug 2011)
https://community.cadence.com/cadence_blogs_8/b/rf/archive/2011/08/11/measuring-fmax-for-mos-transistors

The following question has come up in comments: "How do I measure Fmax for a MOS transistor?" The measurement methodology -- testbench, analysis, calculator setup, stimulus, etc. -- does not change whether you are measuring bipolar transistors or MOS transistors. On the other hand, the results for MOS transistors often come out looking wrong or, more correctly, non-physical. Before scratching your head, adjusting your testbench, or doing anything else, you need to consider the model that you are using. For review, the Fmax testbench is shown below. The testbench has two control loops -- a dc control loop that controls the drain current, and an ac loop for measuring the s-parameters of the transistor. The control loops are isolated using inductors (dc short, ac open) and capacitors (dc open, ac short). You could use analysis-dependent switches in place of the inductors and capacitors if you prefer. Figure 1: Fmax Testbench Using this testbench, let's explore some different approaches to modeling a MOS transistor and see what happens. We will look at three different device modeling approaches: 1) Using the standard bsim3v3 model. 2) Using the standard bsim3v3 model with RF extensions. 
The BSIM3v3 model does not account for the extrinsic elements of the MOS transistor that can affect its RF performance, for example, the resistance of the gate, the substrate resistance, etc. 3) Using the bsim4 model. The bsim4 model includes the extrinsic components within the model. We won't discuss the details of device modeling in this blog; if you are interested, you can find more information in the reference [1]. Please note that approach 2 and approach 3 are equivalent methods of implementing the model extensions discussed in the reference. To compare the models, we will start by simulating the maximum unilateral gain in order to find the Fmax. The results are shown in Figure 2 below. Let's look at what the simulation results are telling us about the transistor models. The results for the default bsim3v3 model look non-physical, since the maximum unilateral gain has large peaks in the response at frequencies above 10GHz and the response does not roll off until almost 100GHz. However, both the bsim3v3 model with RF extensions and the bsim4 model show the results we would expect: the gain is flat at low frequencies and rolls off at high frequencies. One additional comment about the simulation results: due to some PDK limitations, the bsim3v3 models are from a 180nm feature size PDK, while the bsim4 data is from a 45nm feature size PDK. So the simulated Fmax is different due to process scaling and not due to differences in the modeling approach. For devices from the same PDK modeled using the two approaches, the Fmax should be consistent. Figure 2: Comparing the Maximum Unilateral Gain In previous blog posts, we have discussed the good things that simulation allows you to do, that is, perform measurements that you cannot perform in the real world. Idealizing testbench behavior or, more correctly, including exactly the phenomena that the designer specifies, is good when creating testbenches. 
The simulation will ignore all the higher-order phenomena that degrade measurement accuracy. So, for example, we can measure ft directly in simulation instead of extracting it from s-parameters as we would have to do if we tried to measure it in the lab. On the other hand, simulation also ignores all the higher-order device behavior that designers do not specify. As a result, effects that can degrade design performance are ignored. The solution is to improve model fidelity, which will also increase model complexity and simulation time. So designers need to make a trade-off between how accurately to model a transistor's characteristics and their objectives when simulating. While an RF designer may want to use RF models, not everybody needs them. For example, if you are designing a band-gap reference, then you probably don't need an RF model; you are more interested in modeling the effect of process variation on the circuit. In summary, simulating the Fmax of a MOS transistor is similar to simulating the Fmax of a bipolar transistor. As we discussed, you can use the testbench to perform sanity checks on your models to verify that they are appropriate for your application, or to select the best component from the PDK for your application. You can also use the testbench to optimize the performance for your operating conditions, that is, trade off gate length and gate width to give the best Fmax or noise figure for the given bias conditions. 
Best Regards, Art Schaldenbrand

References: [1] BSIM4v4.7 MOSFET Model User's Manual, Morshed et al., Chapter 9, High Speed/RF Models, pages 75-84, http://www-device.eecs.berkeley.edu/~bsim3/BSIM4/BSIM470/BSIM470_Manual.pdf

Measuring Transistor fmax (Tue, 07 Dec 2010)
https://community.cadence.com/cadence_blogs_8/b/rf/archive/2010/12/07/measuring-transistor-fmax

There were several questions about measuring transistor fmax in comments posted to my previous Measuring Transistor ft and Simulating MOS Transistor ft blog posts. So in this posting we will look at simulating transistor s-parameters and device characteristics including fmax, noise, and distortion. There are two parts to characterizing a device -- creating the testbench and performing the measurement. First, we will look at creating a testbench to measure transistor s-parameters. While we can't directly use the ft testbench to measure s-parameters, it will serve as the basis for the s-parameter testbench. The current feedback loop from the ft testbench will be used to define the transistor's dc operating point. Then we will add ports to the testbench in order to measure the transistor's s-parameters. The ports define the reference impedance and the port number for s-parameter analysis. The complexity is that we need to isolate the current feedback loop, which stabilizes the dc operating point, from the ports used for s-parameter analysis. To isolate the dc and ac signal paths, the dc paths include inductors (dc shorts, ac opens) and the ac paths include capacitors (dc opens, ac shorts). 
The corner frequency of the LC network is set low enough so that frequency sweeps can be performed from frequencies as low as 1Hz (see Figure 1). Figure 1: fmax Testbench Next, let's talk a little bit about how to perform the fmax measurement using Virtuoso Analog Design Environment (ADE). We will use Spectre's s-parameter analysis to simulate the transistor's s-parameters and then calculate fmax from the s-parameter data. We will calculate the fmax from the s-parameters using Mason's Unilateral Power Gain. Let's look at the process step-by-step.

1) First, we will perform s-parameter analysis. We will start by selecting the input and output ports, in this case port1 and port2. Figure 2: Setting Up s-parameter Analysis
2) In order to improve the accuracy of the measurement, we will use 100 points/decade instead of the default value, 20 points/decade. Increasing the number of points reduces the interpolation error when we make the fmax measurement using the cross() function.
3) ADE can calculate the Unilateral Power Gain from the device's s-parameters. The Maximum Unilateral Power Gain measurement is available from either of the following options:
   a. From ADE select Results --> Direct Plot --> Main Form..., then in the sp analysis section choose Gumx. Figure 3: S-parameter Direct Plot
   b. From ADE select Tools --> Calculator..., then select gumx from the RF functions.
4) In our case, we will use the ViVA Calculator because we want to know the frequency at which the Unilateral Power Gain is 0dB. This measurement can be done using the cross() function. In this case, we have saved the Maximum Unilateral Power Gain and the fmax measurement, cross(dB10(Gumx()) 0 1 "falling" nil nil), as outputs in ADE. Figure 4: ADE with fmax Measurement
5) If you have ever done the measurement in the lab, you probably did not measure the 0dB crossing -- you extrapolated from a higher level to the 0dB crossing due to measurement noise. 
Simulating fmax is different from measuring fmax; as a result, when simulating, we can directly measure fmax. We do not need to extrapolate to estimate the 0dB crossing as you would in the lab.
6) On the other hand, the accuracy of the fmax simulation is affected by how well you model the actual device, for example, by using a BSIM4 model with gate resistance, substrate resistance, ...

Once the simulation is complete, we can begin to measure the fmax from the Gumx gain plot (see Figure 5). Figure 5: Calculating fmax from Gumx Using ADE's Parametric Plotting function (see the Measure Twice, Cut Once post for details) we can sweep the operating conditions and see the effect on fmax (see Figure 6). Designers can use this information to optimize the speed/performance of their design. Figure 6: fmax vs. collector current To review, in this post we have looked at how to simulate the fmax of a transistor. This testbench and methodology are based on s-parameter simulation. Any transistor parameter that you might wish to measure using s-parameters can be simulated -- for example, noise figure or IIP3. I hope you found this post useful. Please let me know if you have any questions. 
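For readers who post-process exported two-port data themselves, Mason's Unilateral Power Gain is straightforward to compute directly. The sketch below uses the Z-parameter form of the definition, U = |z21 - z12|^2 / (4(Re z11 * Re z22 - Re z12 * Re z21)), with made-up illustrative values rather than data from the testbench above:

```python
def masons_u(z11, z12, z21, z22):
    """Mason's Unilateral Power Gain from two-port Z-parameters.
    Fmax is the frequency at which U falls to 1 (0 dB)."""
    num = abs(z21 - z12) ** 2
    den = 4 * (z11.real * z22.real - z12.real * z21.real)
    return num / den

# A reciprocal (passive) two-port has z12 == z21, so U = 0 ...
print(masons_u(50 + 0j, 10 + 0j, 10 + 0j, 50 + 0j))
# ... while this idealized active two-port sits exactly at U = 1,
# i.e. it is operating right at its Fmax.
print(masons_u(1 + 0j, 0j, 2 + 0j, 1 + 0j))
```

A useful property of U is that it is invariant under lossless embedding, which is why it is the standard figure of merit for extracting Fmax; the 0dB-crossing search is the same interpolation used for the ft measurement earlier in this feed.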
Best Regards, Art Schaldenbrand

Related Resources: Virtuoso Analog Design Environment, Virtuoso ADE Product Suite

Measure Twice, Cut Once for Transistor ft (Wed, 06 Oct 2010)
https://community.cadence.com/cadence_blogs_8/b/rf/archive/2010/10/06/measure-twice-cut-once

Recently there was an inquiry about the methodology for performing the ft (transition frequency) versus Ic measurement described in my Measuring Transistor ft blog post from July 2008: By bid75 on September 8, 2010: "I am unable to understand how the ft vs. Ic plot is generated. How do you do a nested sweep of dc bias current and ac analysis to determine ft at each bias current?" Initially, I was just going to fire off a quick response. However, after thinking about the question, it seemed like a topic that needed to be explored in more detail. So you are going to get this appended posting (and a really cool title). In answer to the question, the tool that performs the nested sweep is the parametric analysis in Virtuoso Analog Design Environment -- specifically, it's a feature of ADE-L. I think that parametric analysis is a useful tool, and hopefully after reading this posting you will too. In this case, parametric analysis will be used to perform a nested sweep, sweeping the ft measurement across bias current. Remember that the ft measurement includes a frequency sweep. Parametric analysis is also useful for performing a what-if analysis to better understand design trade-offs. To enable parametric analysis in ADE, select Tools --> Parametric Analysis... and the Parametric Analysis window will open, assuming you are using the Wilson current mirror based testbench. 
1) Select the variable to sweep, ICE.
2) Select the sweep range, from 1µA to 10mA with 3 steps per decade. Note: You will need to adjust the range based on the device that you are analyzing.
3) To run the analysis, click on the green arrow.
4) When the simulation is complete, plot the results, ft and Ic. Note: You will need to change the X-axis variable from the swept variable, ICE, to the collector current, Ic.

Figure 1: Parametric Analysis Setup

Measuring ft is a simple application of parametric analysis. Next, let's look at some other applications. First, we will look at one common challenge designers face as power supply voltages scale down -- understanding the input common-mode range of their designs. Different people have different figures of merit for the input common-mode range of an operational amplifier. Here we will define the input common-mode range as the range of input common-mode levels over which the dc (maximum) value of the open-loop gain falls by no more than 3dB from the peak value (see Figure 2). Parametric analysis makes it easy to visualize the input common-mode range of the amplifier. Not only can we measure the values, we also get a qualitative feel for how much margin we have before the amplifier fails. Figure 2: Parametric Analysis Results for Input Common-Mode Range Lastly, we will apply parametric analysis to a more complex measurement. Suppose that you would like to understand the limits of the dynamic performance of an A/D converter -- for example, measure the Effective Resolution Bandwidth (ERBW). The Effective Resolution Bandwidth is the input frequency at which the SINAD at full scale falls by 3dB compared to the SINAD at dc. It is a useful figure of merit to measure the conversion bandwidth of an A/D converter. Shown in Figure 3 is an example of simulating the Effective Resolution Bandwidth of a five-bit A/D converter. 
By nesting this sweep inside other sweeps, we can analyze the effect of circuit operating conditions on circuit performance -- for example, the effect of power supply voltage or temperature variations on the bandwidth of an A/D converter. One comment is that you need to properly parameterize your testbench and choose the appropriate sweep variable when using parametric analysis. We will save the discussion of how to properly parameterize a testbench for another posting. Figure 3: Flash ADC SINAD as a Function of Frequency So the summary is that you can use parametric analysis to perform the nested sweep for analyzing ft. However, as we have discussed, there are many other applications of parametric analysis. Hope this posting was useful. As always, please let me know if you have any questions or comments! Best Regards, Art Schaldenbrand

Analyzing Distortion With Spectre RF (Fri, 18 Dec 2009)
https://community.cadence.com/cadence_blogs_8/b/rf/archive/2009/12/18/analyzing-distortion-with-spectre-rf

Greetings, In the previous appends, we looked at using Shooting Newton Periodic Steady-State analysis to analyze analog circuits. In this append, we will look at using Harmonic Balance Periodic Steady-State (HBPSS) to analyze analog circuits. HBPSS is widely used for RF and microwave circuit design. However, designers often do not realize that it can also be useful for analog circuit design, in particular, when they would like to analyze distortion. As an example, we will simulate the Total Harmonic Distortion (THD) of an amplifier. 
We will compare and contrast using transient analysis with the Fourier transform and using HBPSS to analyze distortion. The test circuit is a simple audio amplifier for headphones built from an LM386 op-amp, shown in Figure 1. Figure 1: LM386 Audio Amplifier Typically, transient analysis with the Fourier transform is used to simulate the THD of an audio amplifier. The challenge with using transient analysis is to optimize the transient analysis simulation configuration for accurate Fourier analysis [1]. Fourier analysis requires that the circuit has reached sinusoidal steady-state; that is, we need to measure the response after the start-up transient of the system has completed settling. Achieving sinusoidal steady-state can require settling for many periods in audio designs because of the large time constants due to the large off-chip capacitors used for dc blocking. Of course, performing Fourier analysis can alter the apparent spectrum of the amplifier unless designers are careful with their simulation and Fourier analysis setup. To illustrate the limitations of Fourier analysis and the benefit of steady-state analysis for this application, several simulations were run. In each case, the THD was calculated over one period of the fundamental frequency, in this case 1kHz. Four transient simulations were performed with different amounts of delay allowed to settle the start-up transients of the circuit before performing the Fourier analysis. The delay times were 0, 1, 3, and 10 periods of the fundamental frequency. The THD for each simulation condition is shown in Table 1. In this case, the simulation is performed using Spectre's conservative error preset. The conversion from the time domain to the frequency domain was performed using the ViVA Waveform Calculator FFT function and the Spectre Fourier Integral. 
Table 1: THD Results for Various Simulation Conditions Some observations about the simulation results: As expected, the simulated THD is sensitive to the delay time. The longer the delay time, the closer the amplifier is to sinusoidal steady-state and the more accurate the Fourier analysis. After about 10 periods, the amplifier has reached sinusoidal steady-state and the results for the Fourier Integral and FFT are consistent with the HBPSS analysis. In this case, the HBPSS analysis was performed based on the dc operating point of the circuit; transient-assisted harmonic balance analysis was not required. For this simple example, the simulation using harmonic balance PSS analysis is >5x faster than using transient analysis with the Fourier transform. As circuits become larger, and especially for post-layout simulations, we would expect the difference between the transient analysis time and the dc operating point calculation to grow, making HBPSS even more effective. Reducing simulation time enables designers to analyze THD across process variations, with corner and Monte Carlo analysis, or to optimize THD. One question may be why we didn't use Shooting Newton for the periodic steady-state analysis. The short answer is that Shooting Newton is not required in this case. Harmonic balance analysis provides the steady-state solution in terms of a finite Fourier series and is very effective for simulating distortion. If the time domain waveforms were more non-linear, for example, when simulating a switched-capacitor circuit or a dc-to-dc converter, then Shooting Newton would be appropriate. To help illustrate the need to settle the initial start-up transient, I have plotted the non-periodicity, one of the outputs of Spectre's Fourier Integral analysis, as a function of settling time, see Figure 2. The non-periodicity measures the difference between the initial value and the final value. When the response is in sinusoidal steady-state, the non-periodicity will be 0. 
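Once the waveform really is in sinusoidal steady-state, the THD computation itself is simple. As a hedged illustration (synthetic data, not the LM386 simulation), the sketch below takes single-bin DFTs over exactly one period and ratios the harmonic energy to the fundamental:

```python
import math, cmath

def harmonic_amplitude(samples, k):
    """Amplitude of harmonic k via a single-bin DFT, assuming the
    samples cover exactly one period of the fundamental."""
    n = len(samples)
    bin_k = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                for i, s in enumerate(samples))
    return 2 * abs(bin_k) / n

def thd(samples, nharm=5):
    """THD = sqrt(sum of harmonic powers) / fundamental amplitude."""
    fund = harmonic_amplitude(samples, 1)
    dist = math.sqrt(sum(harmonic_amplitude(samples, k) ** 2
                         for k in range(2, nharm + 1)))
    return dist / fund

# Synthetic steady-state waveform: 1% second harmonic, so THD = 1%.
n = 1024
wave = [math.sin(2 * math.pi * i / n) + 0.01 * math.sin(4 * math.pi * i / n)
        for i in range(n)]
print(thd(wave))  # ~0.01
```

The "exactly one period" assumption is the whole point of the settling discussion above: if the record is not periodic, the bins smear and the measured THD reflects the start-up transient rather than the amplifier's distortion.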
Figure 2: Effect of Settling Time on Periodicity This approach, using harmonic balance analysis for periodic steady-state analysis to supplement transient analysis with the FFT, can be applied whenever you need to measure the distortion of a linear amplifier. In the next append, we will look at extending this approach to using PSS for distortion analysis of non-linear circuits. Hope you found this append useful, please let me know! Art Schaldenbrand
Fourier analysis requires that the circuit has reached sinusoidal steady-state; that is, we need to measure the response after the start-up transient of the system has completed settling. Achieving sinusoidal steady-state can require settling for many periods in audio designs because of the large time constants due to the large off-chip dc-blocking capacitors. Of course, performing Fourier analysis can alter the spectrum of the amplifier unless designers are careful with their simulation and Fourier analysis setup.

To illustrate the limitations of Fourier analysis and the benefit of steady-state analysis for this application, several simulations were run. In each case the THD was calculated over one period of the fundamental frequency, in this case 1kHz. Four transient simulations were performed with different amounts of delay allowed to settle the start-up transients of the circuit before performing the Fourier analysis. The delay times were 0, 1, 3, and 10 periods of the fundamental frequency. The THD for each simulation condition is shown in Table 1. In this case, the simulation is performed using Spectre's conservative error preset. The conversion from the time domain to the frequency domain was performed using the ViVA Waveform Calculator FFT function and the Spectre Fourier Integral.
Periodic Steady-State Analysis for DC-to-DC Converters

In "Spectre RF by any other name ...", a non-RF application for Spectre RF's periodic steady-state analysis was introduced: an example of using periodic steady-state analysis [PSS] to simulate the dynamic performance, THD and SFDR, of a switched-current Digital-to-Analog Converter [DAC]. In this append, we will look at using periodic steady-state analysis for another non-RF application, switching regulator simulation.
Switching regulators are the core of switched-mode power supplies [SMPS] and are interesting because they are used in most power supplies, including the high-efficiency power supplies required for mobile applications. Let's begin by considering a simple switching regulator design, a buck-down converter for converting from 12V to 5V, shown in Figure 1. The design is a voltage-mode, continuous conduction mode switching regulator. The control block (reference voltage generator, error amplifier, and compensation) drives a pulse-width modulator (ramp generator, comparator, and switch). The output of the switch is filtered by an LC tank and fed back to the control block. The duty cycle of the pulse-width modulator determines the output voltage of the regulator. The inductor and capacitor non-idealities [self-resonance frequency, ESR, ...] are modeled but not shown. Finally, an EMI filter has been included in the design.

Figure 1: Buck-Down Converter schematic

First, let's look at the dynamic response of the regulator. After settling the start-up transient, the regulator operates at the frequency of the ramp generator. When operating at steady-state, the dc level is 5.002V and there is ripple on the regulated output voltage, ~+/-7mV. The transient response of the regulator is shown in Figure 2.

Figure 2: Buck-Down Converter transient response

While transient analysis can be used to verify the overall performance of the circuit, it is difficult to analyze the circuit's performance in the time domain using transient analysis; for example, consider the challenge of trying to simulate the phase margin and gain margin of the control loop. Ideally, we would like to be able to use simulation to improve the buck-down converter design in the same way that ac, noise, and stability analyses can be used for the design of linear circuits.
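As a quick sanity check on those steady-state numbers, the ideal continuous-conduction-mode relations for a buck stage can be evaluated in a few lines. This is my own sketch: the 12V input and 5V output come from the post, but the switching frequency, inductor, and capacitor values below are illustrative assumptions, since the actual component values are not given.

```python
# Ideal continuous-conduction-mode (CCM) buck converter relations.
vin, vout = 12.0, 5.0   # from the post
fsw = 1.0e6             # switching frequency (assumed)
L = 10e-6               # inductor value (assumed)
C = 22e-6               # output capacitor value (assumed)

duty = vout / vin                        # D = Vout/Vin for an ideal buck
di = (vin - vout) * duty / (L * fsw)     # peak-to-peak inductor ripple current
dv = di / (8 * fsw * C)                  # output voltage ripple, ideal capacitor
print(duty, di, dv)
```

The ideal-capacitor ripple computed here is a lower bound; in practice the capacitor's ESR typically adds a comparable or larger contribution, which is one reason the non-idealities are modeled in the schematic.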
However, linear analysis cannot be directly applied to switching regulator designs, so we need to find a new methodology for analyzing the switching regulator. Since the switching regulator has a periodic steady-state, we will apply the periodic steady-state analysis technology in Spectre RF. In this case, a source is used to generate the ramp, so driven periodic steady-state analysis is used. The complete setup for PSS analysis is shown in Figure 3.

Figure 3: PSS Analysis setup

Since a switching regulator has fast-changing time domain waveforms, the Shooting Newton [time domain] periodic steady-state engine was selected. If the Harmonic Balance engine were used, a large number of tones would need to be selected in order to correctly represent the voltage at the output of the comparator and the switch output, since these waveforms are nearly square waves. In this case, the stabilization time [tstab] is equal to the transient simulation time. In practice, a shorter stabilization time would be used to reduce simulation time; allowing the circuit to settle close to steady-state will help convergence. For this test example, a tstab of 2-3us should be sufficient.

Figure 4: Buck-Down Converter periodic steady-state response

The plots of the periodic steady-state response show the switch drive signal, net015 [0-12V], the output of the switch, Switcher Output [-0.8V-12V], and the buck converter output after the LC tank, Regulated Output [4.995V-5.009V]. The transient and periodic steady-state responses match if overlaid, and the average outputs from transient analysis and periodic steady-state analysis are consistent, 5.002V. In the next append, we will look at performing periodic small-signal analysis to analyze the converter's performance. If you have any questions about this append or would like more information, please let me know!
Arthur Schaldenbrand
Comment on Measuring Transistor ft

1) Save the collector and base currents.
2) Calculate beta: in the calculator, divide the collector current by the base current, making sure you use the ac currents for the calculation:

    i("/Q0/C" ?result "ac-ac") / i("/Q0/B" ?result "ac-ac")

3) Use the cross function to find the frequency where the current gain is 1, or convert the expression into dB20 and look for the zero crossing:

    cross(dB20((i("/Q0/C" ?result "ac-ac") / i("/Q0/B" ?result "ac-ac"))) 0 1 "either" nil nil)

Best Regards, Art Schaldenbrand

Comment on Simulating MOS Transistor ft

Hi, sorry I should have posted the netlist sooner; it would have avoided a lot of confusion for
everyone. Best Regards, Art Schaldenbrand

```
simulator lang=spectre
global 0
parameters ICE=100u VCE=5
//
// these model files should be available in the samples directory
//
include "./models/NPNlower.scs"
include "./models/cornerMos.scs" section=TNTP
V0 (net014 0) vsource dc=VCE type=dc
// MOSFET ft
// NOTE: the element instance names have been changed;
// the default names are shown in the bjt section
//   IREFERENCE --> 0V voltage source
//   IFEEDBACK  --> current-controlled, current source
IIN (net014 net9) isource dc=ICE mag=1 type=dc
IREFERENCE (net6 0) vsource dc=0 type=dc
IFEEDBACK (net9 0) cccs gain=1.0 probe=IREFERENCE
NM0 (net014 net9 net6 0) nmos24 w=24u l=1.5u m=10
// BJT ft
I2 (net014 net025) isource dc=ICE mag=1 type=dc
V1 (net012 0) vsource dc=0 type=dc
F0 (net025 0) cccs gain=1.0 probe=V1   // probe the 0V source, named V1 in this listing
Q0 (net014 net025 net012 0) NPNlower
ac ac start=1 stop=100G annotate=status
save NM0:g NM0:d Q0:c Q0:b
```

Spectre RF By Any Other Name ...

It has been a while since I last appended, hope you are well! It was a little bit difficult to come up with a subject to write about, and then recently I was in a meeting where we were talking about transient noise analysis. A designer was discussing the issue of analyzing the noise of a Pipeline ADC as an example of how they use transient noise. The conversation started me wondering whether or not this might be a good application for Spectre RF. After all, Spectre RF PNOISE analysis can be used to analyze the noise of the Sample and Hold in the Pipeline ADC.
Since then I have spent some time exploring how to use Spectre RF to analyze data conversion circuits. The experience has reminded me of the versatility of Spectre RF's periodic steady-state and noise analyses in analyzing complex problems. Shown in Figure 1 is an example of the periodic steady-state results for an 8-bit current-output Digital-to-Analog Converter, DAC. The periodic steady-state analysis results can be used to measure the SFDR and THD for the DAC.

Figure 1: PSS Results for an 8-Bit Switched Current DAC

So back to the title, what is in a name? I have been using Spectre RF for more than 10 years and have often used Spectre "RF" for unusual applications, for example, analyzing switched-mode power supply designs. Yet this was the first time I had seriously looked at Spectre RF for data converters. I too had fallen into the trap of thinking that Spectre "RF" is for "RF" circuits. In the next append, we will look further into simulating data converters with Spectre RF. In the meantime, it would be good to hear from you: have you ever used Spectre RF for non-RF applications?

Art Schaldenbrand

Simulating MOS Transistor ft

One other question that you might ask is: this approach works for bipolars, but what happens when you need to characterize a MOS transistor? Nothing changes; use the same test bench and measurements, see Figure 1. In this test bench a MOS transistor is being compared to a bipolar transistor.

Figure 1: MOS and BJT Comparison

The simulation results are shown in Figure 2.
The difference in the results is that the low-frequency current gain of the bipolar transistor is limited by the base current, while the MOS transistor's current gain is not. Note that in advanced-node processes, MOS transistors do have significant gate leakage, and the plot for the MOS transistor would look more like the plot for the bipolar transistor.

Figure 2: Comparison of current gain

So the same techniques that you would use to characterize a bipolar transistor can also be applied to a MOS transistor.

Measuring Transistor ft

So let's consider a practical example of creating test benches and performing measurements, starting with how to characterize a transistor. A couple of questions to consider before starting are: What parameters do you want to measure? What types of test benches are required to measure these parameters? Let's start by considering how to measure the ft of a transistor; ft is a standard figure of merit used by analog designers to evaluate a transistor's performance. Later we will consider how to measure some other common transistor parameters, fmax and Noise Figure, as well as how to measure device stability. First, let's review the meaning of ft. It is defined as the unity gain frequency of a transistor's short circuit current gain. The first point is that we need to measure the short circuit current gain, so ideally the output terminal, collector [drain], of the transistor will be connected to a power supply. The next point is that we need to calculate the current gain of the transistor.
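Before setting up the test benches, a back-of-the-envelope estimate is useful for sanity-checking the simulated unity gain frequency. For a MOS device, the short-circuit current gain magnitude falls off roughly as gm/(2*pi*f*(Cgs+Cgd)); setting it to 1 gives ft. This is my own sketch, and the gm and capacitance values are illustrative assumptions, not values from the post:

```python
import math

# Single-pole ft estimate for a MOS transistor (hedged sketch).
gm = 1e-3          # transconductance, S (assumed)
cgs = 40e-15       # gate-source capacitance, F (assumed)
cgd = 10e-15       # gate-drain capacitance, F (assumed)

# |id/ig| ~ gm / (2*pi*f*(Cgs+Cgd)); unity gain gives ft:
ft = gm / (2 * math.pi * (cgs + cgd))
print(ft / 1e9)    # ~3.18 GHz
```

If the simulated crossing in the current-gain plot lands far from this estimate, it usually points at a test bench problem (bias point, missing short, or extra loading) rather than the device model.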
For Virtuoso Analog Design Environment users, the Virtuoso Visualization and Analysis waveform calculator can be used to perform this measurement. To calculate ft, plot the current gain by dividing the collector [drain] current by the base [gate] current, and then use the cross function to find the unity gain frequency. An example of calculating ft is shown in Figure 1.

Figure 1: Measuring Transistor ft

When creating a simulation test bench, the natural place to start is the actual measurement test bench. To measure ft, an RF network analyzer can be used to measure the s-parameters, and then the s-parameters can be converted into h-parameters. By plotting h21, ft can be estimated by extrapolating to the unity gain frequency of h21. This approach works well in the lab because wideband shorts do not exist in the real world, so RF measurements need to be performed with input and output matching, and as a result s-parameters are the natural method for characterizing transistors. One issue when testing in the lab is the need for separate bias and RF sources. Typically these sources are isolated with a bias T. In place of a bias T, we will use an inductor [to pass the bias voltage at dc] and a capacitor [to pass the RF input at frequency].

Figure 2: Emulating the Network Analyzer Setup to Measure h21

Using the lab test bench introduces some complexity that is not required when performing the measurement in simulation. By taking advantage of the "ideal" nature of simulation, the test bench can be simplified. In simulation, we can create a perfect short using a voltage source. The voltage source provides bias and acts as a short circuit, replacing the output matching circuitry in the original test bench. The RF input has been replaced by a current source with an ac magnitude of 1 so the current gain can be directly measured. The input bias is still controlled by setting a dc voltage, see Figure 3. This test bench works well when measuring ft for a single bias condition.
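Outside the calculator, the same cross-style measurement can be reproduced on exported ac data. Here is a hedged Python sketch that finds the 0dB crossing of the current gain by interpolation; the single-pole gain model below stands in for simulated data, and the gm and input-capacitance values are assumptions:

```python
import numpy as np

# Find ft as the unity-gain (0 dB) crossing of |h21(f)|, mimicking
# the calculator's cross() on dB20 of the current gain.
freq = np.logspace(6, 12, 601)            # 1 MHz .. 1 THz sweep
gm, cin = 5e-3, 50e-15                    # assumed device values
h21 = gm / (2 * np.pi * freq * cin)       # stand-in |ic/ib| magnitude

gain_db = 20 * np.log10(h21)
i = np.where(gain_db < 0)[0][0]           # first point below 0 dB
# interpolate the crossing on a log-frequency axis
logf = np.interp(0, [gain_db[i], gain_db[i - 1]],
                 [np.log10(freq[i]), np.log10(freq[i - 1])])
f_t = 10 ** logf
print(f_t)    # close to gm / (2*pi*Cin), about 15.9 GHz here
```

Interpolating in dB versus log frequency works well because the current gain of a single-pole device is a straight line on that plot, which is the same reason the extrapolation from h21 works in the lab.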
However, it is difficult to sweep the bias current of the transistor as can be done in the lab with a bias generator.

Figure 3: Enhanced Test bench with an Output Short

The next enhancement is to replace the bias voltage source and resistor with a diode-connected transistor and a current source to set the bias current of the device under test [DUT], see Figure 4. Using a diode-connected transistor to generate the bias voltage allows the bias current to be easily controlled. The dc bias and the RF input are still isolated by the pseudo bias T. This change to the test bench allows a designer to characterize the effect of bias current on ft so the transistor can be operated at its maximum ft.

Figure 4: Improved ft Testbench

Another enhancement to the test bench is to replace the inductor and the capacitor used in the pseudo bias T, shown in Figure 5. The Virtuoso Spectre circuit simulator provides analysis-dependent switches that can be set open or closed depending on the analysis to be performed. This allows the designer to use the same test bench to perform multiple tests, for example, NF, fmax, etc.

Figure 5: Using analysis dependent switches

The test bench I use to measure ft is even simpler: the bias network [diode, analysis-dependent switches, and RF source] is replaced by an ideal current mirror. The current mirror provides feedback to stabilize the bias point. The current source that sets the bias current is also the RF input source, so the bias T is eliminated. BTW, you might recognize this type of circuit; it is called a Wilson current mirror, shown in Figure 6.

Figure 6: My ft Test bench

To review the test bench development process, we started by replicating in simulation the test bench we used in the lab. Then the test bench was optimized by tuning it to take advantage of the "ideal" nature of a SPICE simulator.
Along the way we made several improvements to the measurement process:
- Directly measured ft, eliminating the need to generate the s-parameters and then calculate the h-parameters.
- Added the ability to sweep the bias current so plots of ft vs. Ic can be generated, see Figure 7.

Figure 7: Plot of ft vs. Ic

In closing, I hope that this example of creating a test bench and making measurements will be useful for you. Please let me know what you think. Best Regards, Art Schaldenbrand

Related Resources: Circuit Simulation, Spectre RF Option, Spectre Accelerated Parallel Simulator

Senrinotabi

Greetings! My name is Art Schaldenbrand and I have been at Cadence for 12 years supporting the custom IC design tools in the Virtuoso platform. My interests tend to be as widely varied as the customers I work with, ranging from wireless design to CMOS image sensor design and power management design. One common theme that comes up when talking to customers about any aspect of design is the challenge of using simulation to understand their design, from creating test benches to measuring circuit parameters. In subsequent appends, I would like to discuss these issues and share ideas with you about how to use simulation more effectively. - Art