<?xml version="1.0" encoding="UTF-8" standalone="no"?><?xml-stylesheet type="text/xsl" href="https://community.cadence.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0"><channel><title>Cadence Blogs</title><link>https://community.cadence.com/search?q=type%3Ablog%20-%22Chinese%20blog%22%20-%22Japanese%20blog%22%20-%22Taiwanese%20blog%22</link><description></description><dc:language>en-US</dc:language><generator>Telligent Community 12</generator><language>en-us</language><itunes:explicit>no</itunes:explicit><itunes:subtitle>Search results for 'type:blog -"Chinese blog" -"Japanese blog" -"Taiwanese blog"'</itunes:subtitle><item><title>Good News | Cadence Palladium Z3 and Protium X3 Systems Win a 2025 World Electronics Achievement Award</title><link>https://community.cadence.com/cadence_blogs_8/b/ctzcn/posts/cadence-palladium-z3-protium-x3-2025</link><pubDate>Wed, 18 Mar 2026 16:59:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364038</guid><dc:creator>Yaoyao Wang</dc:creator><guid>/cadence_blogs_8/b/ctzcn/posts/cadence-palladium-z3-protium-x3-2025</guid><slash:comments>0</slash:comments><description>Amid the accelerating evolution of electronic design worldwide, Cadence has once again earned industry recognition for its innovation. The World Electronics Achievement Awards (WEAA) ceremony, hosted by the global electronics media group ASPENCORE, was held in Shenzhen on November 25, 2025. There, the Cadence Palladium Z3 emulation system and Protium X3 FPGA prototyping system won the "EDA/IP/Software Product of the Year" award at the 2025 WEAA. This honor affirms Cadence's long-standing investment in verification and its leadership in design acceleration, and marks another milestone for its Intelligent System Design™ strategy of driving sustained industry breakthroughs. A New Era of Verification Acceleration As AI, automotive electronics, data centers, and high-performance computing rise together, SoC design scale keeps climbing. Facing multi-billion-gate designs and multi-system co-development, traditional verification methods can no longer keep pace with development schedules. Responding to customer needs, Cadence introduced the next-generation Palladium Z3 and Protium X3 systems: a future-ready verification and prototyping platform that lets hardware and software be developed in parallel and accelerates innovation. This "dynamic duo" builds on Cadence's accumulated expertise in emulation and prototyping and sets a new standard for verification efficiency with an all-new architecture: more than 2x the capacity, supporting designs from 16 million to 48 billion gates; roughly 1.5x the speed, delivering faster compile, shorter bring-up, and higher throughput; a unified migration architecture for seamless switching between emulation and prototyping; and a modular design that scales flexibly to the parallel verification needs of different teams. Their arrival marks the shift from functional verification to system-level acceleration. Two Platforms, One Flow: From Emulation to Prototyping The synergy between Palladium Z3 and Protium X3 embodies Cadence's engineering philosophy: a unified architecture that connects hardware emulation naturally to software validation. Early in a design, engineers can use Palladium Z3 for full-system functional verification to uncover potential issues; as the system matures, they can migrate directly to Protium X3 for software debug and performance validation, so software development starts before silicon is back. The two systems share a common compile front end and interface environment and support free switching between virtual and physical interfaces, closing the loop across design, verification, and software development. The new-generation Palladium Z3 and Protium X3 deliver multiple architectural and functional breakthroughs; they are not just verification tools but intelligent acceleration platforms. Palladium Z3 integrates several domain-specific apps, including the industry's first 4-state emulation for faithful modeling of real silicon behavior, real number modeling (RNM) for mixed-signal verification, and dynamic power analysis (DPA) for system-level low-power optimization. Built with NVIDIA BlueField DPU and Quantum InfiniBand technology, the systems switch freely between virtual and physical interfaces while maintaining consistency and high throughput in multi-system scenarios. Distributed compile and fast incremental debug let engineers complete multiple design iterations in a single day, making complex-design verification more flexible and efficient. With these innovations, Palladium Z3 and Protium X3 not only speed up verification but also help customers build true "digital twin" design environments, previewing real-world outcomes in the virtual world. From data centers to automotive electronics, and from AI computing to mobile devices, Cadence's hardware acceleration platforms have become a trusted choice for innovators worldwide. Since launch, the Palladium Z3 and Protium X3 systems have been adopted by many leading companies, with partners including AMD, Arm, and NVIDIA deeply integrating Cadence's verification solutions into their development flows. Opening a New Era of Verification with Intelligent System Design Palladium Z3 and Protium X3 are not only a reinvention of the verification platform but also a key part of Cadence's Intelligent System Design strategy. Through hardware/software co-development, AI-driven analysis, and system-level integration, Cadence is helping customers accelerate comprehensively "from idea to silicon and beyond." Winning the 2025 World Electronics Achievement Award is authoritative recognition of the technical breakthroughs in Palladium Z3 and Protium X3 and of Cadence's sustained innovation and focus on customer value. This honor belongs to every customer who trusts us and every partner walking the road of innovation with us. Going forward, Cadence will continue working with partners worldwide, using advanced EDA technology and hardware acceleration platforms to drive the continued evolution of intelligent system design and inject lasting momentum into the electronics industry.</description></item><item><title>Cadence Tensilica Vision DSP Powers Axera, Boosting Humanoid Robot and IoT Application Performance</title><link>https://community.cadence.com/cadence_blogs_8/b/ctzcn/posts/tensilica-vision-dsp</link><pubDate>Wed, 18 Mar 2026 16:30:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364025</guid><dc:creator>Yaoyao Wang</dc:creator><guid>/cadence_blogs_8/b/ctzcn/posts/tensilica-vision-dsp</guid><slash:comments>0</slash:comments><description>Cadence and edge SoC leader Axera (爱芯元智) recently announced that Axera has integrated the Cadence&amp;#174; Tensilica&amp;#174; Vision 230 DSP into its latest AX8850N platform to jointly advance humanoid robotics, smart cities, and edge applications. The announcement marks an important milestone in the two companies' collaboration to deliver high-performance, low-power solutions for next-generation intelligent devices. The AX8850N is Axera's flagship SoC for edge applications such as humanoid robots, smart cameras, and industrial automation. The AX8850N SoC integrates Axera's in-house 72 TOPS NPU along with two Tensilica Vision 230 DSPs. As part of the subsystem, the Vision 230 DSPs handle pre- and post-processing and execute operations that cannot be mapped to the NPU, serving as a robust fallback coprocessor. Moreover, compared with the previous generation of Vision DSPs, the Vision 230 DSP is significantly enhanced at the architectural level, more than doubling performance while offering greater scalability and customizability. Liu Jianwei, co-founder and vice president of Axera, said: "We are delighted to partner with Cadence Tensilica to bring leading-edge technology to our customers. The Tensilica Vision 230 DSP plays an important role in our AX8850N platform, delivering further gains in performance and efficiency. In addition, the Vision 230 DSP's enhanced SLAM support and optimized libraries markedly improve navigation for humanoid robots and autonomous vehicles, making the AX8850N an ideal platform for these applications." Amol Borkar, product management and marketing director for Tensilica DSPs in the Cadence Silicon Solutions Group, said: "Our collaboration with Axera demonstrates the value of advanced Tensilica DSP technology in next-generation robotics and IoT SoCs. With its efficient architecture, the Vision 230 DSP combines high performance and low power in the pre- and post-processing stages that are critical to deploying deep learning and machine learning applications. Moreover, the Vision DSP's mature software libraries accelerate algorithm porting, shorten time to market, and preserve backward compatibility with existing code." At the recent Embedded Vision Summit 2025, Cadence demonstrated a SWIN Transformer running entirely on the Vision 230 DSP. The demo ran on the Sipeed MaixBox M4N (AXera-Pi Pro) development board ( link ) built around the AX8850N SoC. The SWIN Transformer is a general-purpose backbone for a new generation of deep learning tasks, and the demonstration highlighted how well the AX8850N and Vision DSP address leading market trends. Tensilica DSPs also support the Tensilica Instruction Extension (TIE) language, which lets vendors add new instructions to the processor pipeline to customize the processor. Over the past decade, Cadence Tensilica Vision DSPs have been deployed with excellent results across mobile devices, autonomous vehicles, smart homes, industrial IoT, and even humanoid robots. Beyond Vision DSPs, the Tensilica family also includes HiFi DSPs and the LX and NX controllers, which deliver outstanding performance in voice/audio and microcontroller applications, respectively.</description></item><item><title>News Flash | Cadence Conformal AI Studio Upgrades the AI-Driven SoC Logic Verification Flow</title><link>https://community.cadence.com/cadence_blogs_8/b/ctzcn/posts/cadence-conformal-ai-studio-ai-soc</link><pubDate>Wed, 18 Mar 2026 15:59:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364039</guid><dc:creator>Yaoyao Wang</dc:creator><guid>/cadence_blogs_8/b/ctzcn/posts/cadence-conformal-ai-studio-ai-soc</guid><slash:comments>0</slash:comments><description>With Conformal AI Studio, Cadence combines reinforcement learning and a distributed architecture to comprehensively upgrade LEC, low-power verification, and ECO, opening a new paradigm for the AI design era. As the artificial intelligence (AI) wave sweeps through semiconductor design, verification technology stands at a critical turning point. EE Awards Asia, co-hosted by the ASPENCORE publications EE Times and EDN and now in its fifth year, continues to honor outstanding contributions to electronic design and innovation by Asia's engineering community. Cadence Conformal AI Studio won "Best EDA Product of the Year" at this year's EE Awards Asia, a recognition that reflects both strong industry endorsement and the product's central role in taming ever-rising SoC design complexity. Zhuo Li, vice president of R&amp;amp;D at Cadence, recently spoke with EE Times Asia about how the new platform brings next-generation AI technology to logic equivalence checking (LEC), low power signoff, and engineering change order (ECO) flows, opening a new paradigm for SoC verification. A "Perfect Storm" of Interwoven Complexity In the interview, Li noted that SoC complexity has climbed sharply over the past decade and that traditional verification methods can no longer keep up. He summarized three structural challenges: explosive growth in design size and power domains, exponential expansion of the LEC search space, and a marked increase in the number and frequency of ECOs, which together form a "perfect storm" for the verification flow. Li emphasized that SoC power domains have grown from single digits a decade ago to dozens or even hundreds, while aggressive PPA targets and advanced datapath synthesis have inflated design sizes nearly a hundredfold; in some SoCs, datapath logic now accounts for as much as 70%. Against this backdrop, traditional Boolean engines are running out of steam, and ECOs now consume 5-20% of the overall design cycle, demanding more automation and better patch optimization. "This is exactly why Cadence built the new Conformal AI Studio," Li said. "The goal is to attack the key bottlenecks of traditional equivalence verification from the underlying architecture up." Conformal AI Studio: Three Core Engines Reshape IC Verification Facing this across-the-board rise in verification complexity, Cadence rethought the verification architecture at the system level, building its core strategy on three tightly integrated, mutually reinforcing AI engines. 1. Conformal AI LEC: distributed, AI-accelerated logic equivalence checking. The new LEC engine uses a distributed architecture with enhanced datapath reasoning and introduces reinforcement learning to automatically explore the best path through a vast solution space, solving complex cases that previously required expert hand-tuning. It is also the first to support sequential optimization, including sequential clock gating and sequential reset techniques, making it the first platform able to fully verify next-generation PPA optimizations at SoC scale. 2. Conformal AI Low Power: flat, multithreaded, scalable. For low-power verification, Conformal AI Low Power introduces the industry's first fully distributed engine, enabling flat analysis of billion-gate designs and avoiding the risk of local blind spots, while data-driven diagnostics greatly accelerate root-cause analysis. 3. Conformal AI ECO: patches reduced by up to 100x. For ECO implementation, Cadence introduces three innovations: RTL-level difference comparison, in-engine Boolean synthesis optimization, and reinforcement-learning-based patch optimization. Together they shrink patch size by 10x on average, and by up to 100x, ensuring implementable ECOs even under severe schedule pressure. A New Verification Paradigm Li pointed out that Conformal AI has broken completely with the traditional "single-run" verification model. Every verification result feeds back into a shared data platform that supports trend analysis, HTML dashboards, and a continuously self-adapting learning mechanism, steadily improving model performance. He stressed that this is Cadence's first product with a true historical data platform: as designs evolve, the machine learning models grow with them, giving engineers significant, quantifiable productivity gains. From an industry perspective, AI has become a key growth engine for the EDA market. MarketsandMarkets forecasts that the global EDA market will grow from $11.5 billion in 2024 to $18.3 billion in 2030, driven mainly by AI in the design flow and continually rising SoC complexity. Under this trend, semiconductor companies increasingly rely on AI to close the widening gap between engineering resources and design demand. Li emphasized that AI does not replace human expertise; it is a catalyst for a leap in productivity. More than half of semiconductor designs already use AI in some form, from reinforcement-learning-driven optimization flows to emerging large language model (LLM)-driven workflow agents. Cadence sees AI both as an automation engine and as a partner that helps designers focus on system architecture and high-level decisions. What's Next Since the launch of Conformal AI Studio, market and customer interest has grown rapidly. Cadence continues to refine the core engines while preparing for the next technology wave: agentic AI to assist equivalence verification. Li revealed that several internal R&amp;amp;D projects are under way, including LLM-driven agents that provide intelligent Q&amp;amp;A, automated workflow guidance, and intelligent error parsing and debug assistance for the Conformal flow, with encouraging early results. Ecosystem Collaboration: The Key to Success Li credits the success of Conformal AI Studio to the complete ecosystem behind Cadence, including close collaboration with design teams, foundry partners, and customers. In today's highly complex SoC environment, he said plainly, no EDA tool can go it alone; only deep industry collaboration can produce truly successful solutions. As Cadence continues toward a fully AI-driven design future, Conformal AI Studio is not just a product but an important milestone. Alongside its recognition at EE Awards Asia 2025, it announces that a new generation of verification technology, built for the era of trillion-transistor systems, has officially arrived.</description></item><item><title>Unifying Electronic and Photonic Circuit Simulation</title><link>https://community.cadence.com/cadence_blogs_8/b/cic/posts/unifying-electronic-and-photonic-circuit-simulation</link><pubDate>Tue, 17 Mar 2026 17:00:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364036</guid><dc:creator>Corporate</dc:creator><guid>/cadence_blogs_8/b/cic/posts/unifying-electronic-and-photonic-circuit-simulation</guid><slash:comments>0</slash:comments><description>The Need For Photonics The proliferation of artificial intelligence, the rollout of faster mobile networks, and the corresponding demand for vast data storage all require unprecedented processing power and data transfer capacity. To meet these bandwidth requirements, designers are pushing hardware to its absolute limits. However, electrical-only solutions are hitting a physical wall. Power consumption and heat generation pose critical constraints on system performance and operating costs, and the industry needs innovative ways to move data faster without exceeding power budgets. Photonic integrated circuits (PICs) provide a path forward. By using photons alongside electrons, PICs offer high bandwidth, high energy efficiency, and low cost by leveraging CMOS-compatible manufacturing processes. Designing these complex systems, however, requires advanced tools that can handle both the electrical integrated circuits (EICs) and PICs without creating bottlenecks in your workflow. Merging Two Worlds into One Engine Spectre Photonics extends the trusted Spectre Simulation platform into photonic circuit simulation. It directly addresses the increasing complexity of electronic-photonic co-design by merging electrical and photonic simulation into a single, cohesive engine. Key Features of Spectre Photonics By integrating photonic capabilities into the simulator, Spectre Photonics delivers a consistent simulation experience. 
Key advantages include: Versatile and Combined Circuit Simulation: You can simulate EICs and PICs individually, or simulate a combined EIC and PIC design simultaneously. This single-engine approach ensures accurate results and reveals how different components interact across domains in real-world conditions. Proven Scalability and Performance: Because it builds on the existing Spectre Simulation platform, Spectre Photonics leverages the same performance, speed, and massive scalability that engineers already trust for complex electrical designs. Seamless Virtuoso Studio Integration: Spectre Photonics integrates tightly with the Virtuoso Studio environment. You can design, simulate, and analyze your photonic circuits using the same interfaces and workflows you use for standard IC design, drastically reducing the learning curve. Open Modeling Framework: The solution is based on open modeling frameworks such as Verilog-A and S-parameters. This gives you the flexibility to create custom models, adapt to specific foundry processes, and tailor the simulation to your exact project requirements. Advancing the Cadence Photonics Solution Spectre Photonics is a vital part of the full Cadence photonics solution. It empowers engineering teams to bring high-bandwidth, energy-efficient optical interconnects to market faster. Ready to streamline your electronic-photonic IC design? 
Explore the full capabilities of the Spectre Photonics Option and discover how unified simulation can accelerate your next major project.</description></item><item><title>Accelerating the AI Factory: Switch and Cadence Redefine High-Density Design</title><link>https://community.cadence.com/cadence_blogs_8/b/data-center/posts/accelerating-the-ai-factory-switch-and-cadence-redefine-high-density-design</link><pubDate>Tue, 17 Mar 2026 16:00:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364035</guid><dc:creator>Corporate</dc:creator><guid>/cadence_blogs_8/b/data-center/posts/accelerating-the-ai-factory-switch-and-cadence-redefine-high-density-design</guid><slash:comments>0</slash:comments><description>&amp;quot;We are redefining what is possible for next-gen AI factories with our patent-pending EVO Chamber solution—delivering up to 2MW per cabinet through advanced hybrid cooling in a modular, future-proof design. Using the Cadence Reality Digital Twin Platform, built with NVIDIA Omniverse libraries, Switch is developing a 5‑star DC Element—a physics‑accurate behavioral model and foundational building block for AI factory digital twins—of its EVO Chamber. This EVO Chamber Cadence Reality DC Elements model enables seamless design integration and rapid qualification of IT systems, including NVIDIA GB200 and GB300 NVL72 platforms, across a wide range of operating scenarios. Validation is performed using coupled physics‑based computational fluid dynamics (CFD) and flow network modeling (FNM) co‑simulation, allowing designs to be verified well before a single cabinet is installed.&amp;quot; - Zia Syed, Chief Technology Officer, Switch Introduction As AI factories evolve toward multi-megawatt rack densities, leading operators are turning to digital twins to design, validate, and maximize workload throughput before deployment. 
Switch is advancing this approach by using the Cadence Reality Digital Twin Platform, leveraging NVIDIA Omniverse libraries, to model AI factory environments in which IT equipment, power delivery, and cooling systems are developed and simulated together. This digital-first methodology now extends beyond system-level AI factory design into the physical infrastructure itself—most notably the patent-pending Switch EVO Chamber &amp;#174;, purpose-built to support extreme rack densities and next-generation NVIDIA GB200 NVL72 and NVIDIA GB300 NVL72 platforms. By expanding digital twin validation from the data hall to the cabinet and cooling architecture, Switch and Cadence are redefining how high-density AI infrastructure is designed, tested, and deployed. The Density Challenge Traditional data center designs were never built for the extreme requirements of next-generation generative AI. The sheer power density and variable power demands of modern GPU clusters that exceed 100kW per rack, such as those powered by NVIDIA DGX SuperPOD, require a fundamental rethink of data center architectures, giving rise to AI factories. Switch&amp;#39;s EVO Chamber is purpose-built for this new era. It enables AI factories to scale far beyond the power and cooling limits of traditional data halls, supporting the immense heat rejection requirements of liquid-to-chip and high‑performance air‑cooled systems. Through advanced hybrid cooling, each modular, future‑proof cabinet can support up to 2MW, unlocking unprecedented density while maintaining operational resilience. However, implementing such advanced infrastructure requires precise planning. Integrating these chambers into existing or greenfield sites introduces complex variables across airflow, fluid dynamics, power delivery, cabling, and spatial constraints. 
Simulating the Future of Infrastructure This collaboration empowers organizations to de-risk and maximize token throughput for their AI investments through advanced simulation. The new Cadence Reality DC Elements model of the EVO Chamber allows engineers to drag and drop it directly into any data hall layout within the Cadence Reality Digital Twin Platform. Once placed, these digital twins can be populated with specific cabinet payloads, including NVIDIA DGX GB200 and NVIDIA DGX GB300 systems. This is not a static model, but a physics‑based simulation of a real-world counterpart that predicts token‑production efficiency. By quantifying how power is consumed across compute and infrastructure, the Cadence Reality Digital Twin Platform enables optimization of tokens per watt—maximizing power used for AI workloads within a fixed power envelope. This capability transforms facility planning from a static estimation exercise into a dynamic engineering discipline. Leaders can now answer critical questions with certainty: How will mixed cooling topologies interact within the same hall? Do we have sufficient operational headroom for peak workloads? Are the liquid-to-chip setpoints optimized for energy efficiency? Optimizing Cooling Architectures One of the most significant advantages of this integration is the ability to validate complex cooling strategies in software. The Switch EVO Chamber provides five separate fluidic system pathways designed to support both liquid-to-air and liquid-to-chip cooling strategies. Aligning these pathways with modern direct-to-chip strategies for high-power GPUs is complex. Through the Cadence platform, operators can simulate and compare multiple cooling configurations side by side. This enables the optimization of performance, efficiency, and resiliency in a virtual environment, ensuring capital is deployed effectively and the physical facility is optimized from day one. 
Interoperability and the Open Ecosystem Time to first token is a competitive advantage in the AI market, and silos slow down innovation. Recognizing this, the collaboration emphasizes interoperability. Switch maintains OpenUSD interoperability across its internal design environment. This OpenUSD readiness enables a connected asset pipeline that links layout, visualization, and simulation across different teams and tools, including NVIDIA Omniverse libraries. It bridges the gap between facilities engineering, IT operations, and executive decision-making, providing a single source of truth for the entire organization. Conclusion: Validation Before Deployment In the high-stakes world of AI factories, design flow inefficiencies are liabilities. The integration of Switch&amp;#39;s EVO Chamber into the Cadence Reality Digital Twin Platform provides a comprehensive toolkit for designing higher-performing AI factories faster. By enabling rigorous validation of placement, density, and cooling topology before deployment, Switch and Cadence are helping organizations build not just faster, but smarter. This is more than a modeling tool; it is a strategic asset for operational efficiency. It ensures that, as we scale into the multi-megawatt future, our infrastructure is as intelligent as the AI models it supports. 
Learn more about the Cadence Reality Digital Twin Platform , Digital Twins for AI Infrastructure , and Switch EVO Chamber .</description></item><item><title>Digital Twins Enable the Next Era of AI Infrastructure</title><link>https://community.cadence.com/cadence_blogs_8/b/data-center/posts/digital-twins-enable-the-next-era-of-ai-infrastructure</link><pubDate>Mon, 16 Mar 2026 20:30:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364027</guid><dc:creator>Corporate</dc:creator><guid>/cadence_blogs_8/b/data-center/posts/digital-twins-enable-the-next-era-of-ai-infrastructure</guid><slash:comments>0</slash:comments><description>Artificial intelligence (AI) is reshaping the data center. As AI workloads scale in size and complexity, traditional hyperscale designs are giving way to AI factories—purpose-built environments engineered to manufacture intelligence efficiently, reliably, and at scale. In an AI factory, infrastructure performance is no longer measured solely by availability or power efficiency. Instead, success is defined by tokens generated per watt, workload throughput, and the ability to rapidly deploy and operate next-generation accelerated computing platforms. Meeting these demands requires a new approach to infrastructure design and operations—one grounded in system-level understanding, workload awareness, and continuous optimization. This is where digital twins play a foundational role, and where Cadence is working closely with NVIDIA and industry leaders to enable the next generation of AI infrastructure. From Hyperscale Data Centers to AI Factories AI factories represent an evolution of the hyperscale model . While hyperscale data centers are designed to support a broad mix of workloads, AI factories are optimized specifically for AI training with an increasing emphasis on inference environments. 
These workloads introduce rack power densities of 100kW+, higher thermal loads, extreme AI workload variations, and critical sensitivity to network topology and latency in order to deliver the best performance per watt. As a result, infrastructure decisions can no longer be made in isolation. Power delivery, cooling architecture (such as liquid-cooling and hybrid-cooling), workload placement, and network design are deeply interconnected. Campus-level choices—such as building layout, cooling plant strategy, and fiber routing—directly affect AI performance, efficiency, and scalability. Designing AI factories, therefore, requires holistic, workload-specific optimization, validated before deployment, and continuously refined during operation. Cadence and NVIDIA: Enabling AI Factory Design at Scale As part of the effort to address these challenges, Cadence has collaborated with NVIDIA to deliver a Cadence Reality DC Elements model of NVIDIA&amp;#39;s latest high-performance accelerated computing platform, the NVIDIA GB300 NVL72 system, for the Cadence Reality Digital Twin Platform. This Cadence Reality DC Elements model is integrated with the NVIDIA Omniverse DSX Blueprint for AI factory digital twins and is available as a SimReady model for use via NVIDIA Omniverse libraries through the Cadence Reality DT Experience. SimReady assets provide use-case-specific technical payloads, from high-fidelity visualization and BOM data to lightweight behavioral models for rapid simulation. This enables data center designers and operators to accurately model, simulate, and optimize AI factory designs and changes during operations as part of performance-aware lifecycle management—reducing uncertainty, accelerating design cycles, and enabling confident decision-making. 
By providing validated, high-fidelity infrastructure models, Cadence helps organizations design AI factories that are ready for today&amp;#39;s workloads while remaining adaptable to future generations of accelerated computing. Digital Twins Across the Infrastructure Lifecycle Leading AI factory designers and operators rely on the Cadence Reality Digital Twin Platform to support both design and operational optimization. Unlike static planning tools, digital twins create a continuous feedback loop between design intent and real-world operation. Key capabilities include: Operational digital twins delivered by the Cadence Reality Digital Twin Platform. Visualization and cross-team collaboration of operational digital twins are enhanced by Cadence Reality DT Experience, powered by NVIDIA Omniverse. Cadence Reality DC Elements models now integrated into the NVIDIA Omniverse DSX Blueprint. AI surrogate models that accelerate simulation and optimization, allowing teams to explore more scenarios and tradeoffs much faster, typically in minutes. High‑fidelity AI server and CDU models that speed end‑to‑end system design, shortening the time to deployment while optimizing AI factory architectures. Together, these capabilities enable engineers to move beyond conservative margins and instead operate infrastructure at validated optimal points, harmonizing performance, energy efficiency, and reliability to discover new revenue opportunities. Optimizing Infrastructure for AI Performance In AI factories, performance is measured by the number of tokens generated per second and efficiency by tokens per watt. The Cadence Reality Digital Twin Platform enables designers and operators to optimize infrastructure directly against these metrics by simulating real AI workload behavior across power, cooling, and networking domains. Using AI-accelerated simulations, teams can: Maximize token throughput by operating hardware at the most efficient tokens-per-watt point. 
Digital twin analysis validates running more GPUs at lower power (MaxQ), increasing token generation by up to 30% while improving overall energy efficiency (Q) and boosting tokens per watt by 17%. Minimize cooling energy to unlock more power for compute. Optimized liquid cooling strategies, airflow, 45°C inlet cooling temperatures, and thermal setpoints reduce cooling overhead and free additional power capacity for NVIDIA AI infrastructure. By enabling high-fidelity simulation of AI workload scenarios, these capabilities accelerate gigawatt-scale AI factory buildouts and help unlock billions of dollars in potential revenue by maximizing performance while controlling energy and infrastructure costs. These results highlight the importance of treating infrastructure as a single, integrated system, validated in a high-fidelity digital twin before deployment. Momentum Across the AI Ecosystem Digital twins are rapidly becoming foundational to AI factory design and operations across the ecosystem: NVIDIA is collaborating with Cadence on Reality DC Element models for next-generation Vera Rubin technology. NV5 is relying on the Cadence Reality DC Design software to create digital twins that future-proof AI data centers and optimize infrastructure efficiency and reliability. These workflows have been applied at scale across environments powered by thousands of NVIDIA Grace Blackwell GPUs, such as NVIDIA GB200, where engineering-grade, CFD-driven simulation is critical to identifying and minimizing operational risks before deployment. Together, these examples reflect a growing industry consensus: Digital twins are essential infrastructure for AI factories. Engineering the Future of AI Infrastructure As AI continues to scale, the challenge is no longer simply delivering more compute—it is to efficiently convert power into intelligence. AI factories demand a new engineering mindset in which infrastructure is designed, optimized, and operated as an integrated system. 
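As a back-of-the-envelope check of the MaxQ figures quoted above (up to 30% more tokens with a 17% tokens-per-watt gain), the implied change in total power draw follows from power = throughput / efficiency. A minimal sketch, where the function and variable names are illustrative assumptions rather than anything from Cadence's tooling:

```python
# Hypothetical sketch relating token throughput, tokens per watt, and power.
# The 30% and 17% figures come from the post above; everything else is ours.

def implied_power_factor(throughput_gain: float, tpw_gain: float) -> float:
    """Power = throughput / (tokens per watt), so the power ratio between
    the optimized and baseline operating points is
    (1 + throughput_gain) / (1 + tpw_gain)."""
    return (1.0 + throughput_gain) / (1.0 + tpw_gain)

factor = implied_power_factor(0.30, 0.17)
print(round(factor, 3))  # 1.111: roughly 11% more total power
```

On these assumptions, running more GPUs at MaxQ spends roughly 11% more facility power to earn 30% more tokens, which is consistent with the tokens-per-watt framing above.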
With the Cadence Reality Digital Twin Platform, now integrated into the NVIDIA Omniverse DSX Blueprint, including DSX SimReady assets and AI-accelerated simulation capabilities, Cadence is helping customers design and operate AI factories with greater efficiency, speed, and confidence. The future of AI will be shaped not only by algorithms and silicon, but by infrastructure engineered to manufacture intelligence at scale. Learn more about Cadence&amp;#39;s partnership with NVIDIA and the Cadence Reality Digital Twin Platform . Read the related article: Accelerating the AI Factory: Switch and Cadence Redefine High-Density Design</description></item><item><title>The Engineering Workforce Multiplier: How Agentic AI Is Shaping Silicon Design</title><link>https://community.cadence.com/cadence_blogs_8/b/corporate-news/posts/the-engineering-workforce-multiplier-how-agentic-ai-is-shaping-silicon-design</link><pubDate>Mon, 16 Mar 2026 20:30:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364024</guid><dc:creator>Corporate</dc:creator><guid>/cadence_blogs_8/b/corporate-news/posts/the-engineering-workforce-multiplier-how-agentic-ai-is-shaping-silicon-design</guid><slash:comments>0</slash:comments><description>A virtual engineering organization coordinates reasoning and intent across design and verification, while accelerated, AI‑driven EDA tools—and working with NVIDIA—translate that intelligence into trusted, scalable silicon outcomes. Engineering demand is accelerating faster than human teams can scale—especially in chip design, where complexity is exploding, and expertise takes decades to build. Rather than replacing human expertise, agentic AI offers a new model: extending each engineer with a virtual engineering organization capable of operating at the scale and complexity modern chip design demands. 
Modern chip design and verification workflows are increasingly constrained by scale, complexity, and the availability of experienced engineers. As designs grow in size and interdependence, much of the engineering effort is consumed by repetitive yet critical tasks—such as translating specifications into RTL, constructing verification plans, running regressions, and diagnosing failures—slowing overall progress and increasing the risk of late-stage rework. Cadence addresses these challenges through an agentic AI workflow that coordinates reasoning, context, and execution across the silicon design flow. Introducing the Cadence ChipStack AI Super Agent The first implementation of this is in design and verification—the most time- and resource-consuming portion of the silicon design cycle. The Cadence ChipStack AI Super Agent applies agentic orchestration to front-end design and verification by coordinating multiple specialized AI agents. Achieving this level of autonomy requires more than prompt-based automation with large language models (LLMs). Our research shows that LLMs alone lack the deep, structured understanding needed to reliably produce high-quality RTL and verification testbenches. To address this limitation, the ChipStack AI Super Agent introduces a mental model— a structured knowledge representation that explicitly captures design intent, hierarchy, and relationships. By bridging human intent with machine reasoning, this shared mental model provides a persistent source of truth across agents, improving system-level understanding and enabling consistent generation of RTL and testbenches that engineers can trust. Built directly on this foundation, the ChipStack AI Super Agent deploys agents that operate in parallel across tasks such as RTL generation, testbench creation, regression orchestration, and debug. 
The shared mental model converts multi-modal specifications, design sources, and historical context into a unified knowledge base, enabling coordinated reasoning across the workflow and allowing teams to compress schedules while maintaining the engineering discipline and correctness required in chip design. The Cadence ChipStack AI Super Agent is in production deployment with multiple industry leaders, helping address the engineering productivity challenge and delivering the next generation of AI silicon. Collaborating for the Future of Agentic Chip Design: NVIDIA and Cadence Extending agentic AI beyond front-end digital workflows, Cadence is applying these principles to the distinct demands of custom, RF, and analog design. Analog design has traditionally been highly custom and manual, and analog design synthesis remains one of the biggest challenges the industry is trying to solve. To address this, Cadence and NVIDIA are collaborating on agentic workflows that can accelerate progress on this complex and critical problem. Because proprietary design-domain data resides with one company and flow execution and automation with the other, this effort requires agents from both organizations working together seamlessly. Cadence enables the development and deployment of autonomous, long-running agents using the NVIDIA Agent Toolkit, allowing developers and enterprises to create, manage, and govern autonomous agents that securely offload specialized, time-intensive processes. In doing so, organizations can scale operational expertise and unlock capabilities that would be impractical to achieve with human-only systems. &amp;quot;As semiconductor complexity continues to accelerate, AI has become essential to designing the next generation of chips,&amp;quot; said Timothy Costa, GM of Industrial and Computational Engineering, NVIDIA. 
&amp;quot;Our collaboration with Cadence, including innovations like the ChipStack AI Super Agent, demonstrates how combining intelligent reasoning capabilities such as mental models and automated formal test plan generation with NVIDIA accelerated computing can unlock new levels of productivity and efficiency for chip designers.&amp;quot; In these environments, the value of agentic workflows is not just autonomy, but control. Offloading specialized, time-intensive tasks to agentic workflows—while keeping the human engineer firmly in the loop and preserving fine-grained access control, auditability, and human oversight—allows teams to scale expertise without sacrificing intellectual property protection, accountability, or engineering rigor. As chip design complexity outpaces the ability of human teams to scale, agentic AI enables a new model in which each engineer is supported by a virtual engineering organization that coordinates reasoning, context, and intent across the design flow. AI-powered EDA tools then translate that intelligence into trusted design and verification outcomes. Through our collaboration with NVIDIA, this work demonstrates how these systems can be deployed securely and reliably across complex silicon design workflows. Learn more about agentic AI , the Cadence ChipStack AI Super Agent , and Cadence&amp;#39;s partnership with NVIDIA .</description></item><item><title>The Nexus of Passion and Profession</title><link>https://community.cadence.com/cadence_blogs_8/b/can/posts/the-nexus-of-passion-and-profession</link><pubDate>Fri, 13 Mar 2026 05:24:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364019</guid><dc:creator>BillieJ</dc:creator><guid>/cadence_blogs_8/b/can/posts/the-nexus-of-passion-and-profession</guid><slash:comments>0</slash:comments><description>Whether you&amp;#39;re choosing a college major or shifting direction in an established career, a compass might be found in the Purpose Venn Diagram. 
At the intersection of these questions lies Purpose. On a recent high school STEM Day at the Cadence Austin office, engineers discussed their own answers to these questions. Each unique story not only showcased how students might answer those questions, but also where those circles can converge: right there at Cadence. The Cadence Black Inclusion Group (BIG) partnered with the National Society of Black Engineers (NSBE) Austin chapter as a site host for Engineering Day 2026. Students from four high schools, including the Texas Empowerment Academy, Harmony Science Academy, Pflugerville High School, and Killeen ISD, got to hear from STEM professionals about their own passions, professions, and purpose. The Killeen students had to be on their bus by 5:30am to attend the event! Project coordinator Khalilah Shaw kicked off each session with a tour of the office space, break areas, and game room before gathering students for pictures and settling in for a quick overview of semiconductor design and verification. Technical director Anthony Williams brought energy and inspiration as he gave an overview of Electronic Design Automation (EDA). Amidst the technical content, Anthony shared highlights of his path from professional football to engineering. In his experience advancing from one team to another, from sports to tech, he underscored the value of teamwork and accountability. Solutions engineer LaMark Chance explained how silicon, which happens to be the chief element found in sand, is the premier semiconductor material used in integrated circuits (ICs). Billie Johnson, also a solutions engineer, used a remnant of an ancient silicon ingot from a 6-inch wafer process to illustrate how pure silicon crystals are formed into an IC wafer. A student asked if the industry is going to continue keeping up with Moore’s Law. 
Moore&amp;#39;s law isn&amp;#39;t so much a law of physics but rather an observation by Gordon Moore in 1965, when he was the director of research and development at Fairchild Semiconductor. He noted that the number of transistors in an IC doubled every two years and hypothesized that this trend would continue. Applications engineer Juwon Wharwood and design engineer Paul &amp;quot;Tayo&amp;quot; Adefiranye jumped right in, discussing industry adaptations in silicon processing, fabrication technology, circuit design, packaging, software, and hardware development that all come into play in whether trends will continue to follow Moore&amp;#39;s law or not. They were later joined by application engineer Kiara Chinchay-Diaz, and the trio discussed their draw toward engineering and paths to their current roles. During a break, Moubarak Jeje, liaison to the Austin chapter of the National Society of Black Engineers, asked LaMark more detailed technical questions about the Palladium processor-based hardware emulation platform for pre-silicon verification. The exchange of curiosity and wonder among these professionals rivaled that of the students&amp;#39; moments earlier. STEM days like this can ignite a spark or fan the flames of curiosity in the next generation of local tech talent. As volunteers shared stories of the work they love and its impact on the world, their smiles and tempo made it clear that they&amp;#39;ve arrived at their purpose today. Meaningful work, a healthy salary, and endless opportunities to learn illustrate Life at Cadence... at the nexus of passion and profession. 
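The two-year doubling Moore observed can be illustrated with a quick numerical sketch (the starting count and time horizon below are hypothetical, chosen only to show the compounding):

```python
# Toy projection of transistor count under a fixed doubling period.
# Illustrative numbers only; real scaling has varied by node and era.
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count assuming doubling every two years."""
    return start_count * 2 ** (years / doubling_period)

# A hypothetical design starting at 1 billion transistors, 10 years out,
# would grow by 2**5 = 32x under a strict two-year doubling.
projected = transistors(1e9, 10)
```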
Learn more about the Cadence Academic Network .</description></item><item><title>Reinventing Embedded Memory: How RAAAM Is Solving the SRAM Scaling Wall</title><link>https://community.cadence.com/cadence_blogs_8/b/corporate-news/posts/reinventing-embedded-memory-how-raaam-is-solving-the-sram-scaling-wall</link><pubDate>Thu, 12 Mar 2026 16:00:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364034</guid><dc:creator>Tanushri Shah</dc:creator><guid>/cadence_blogs_8/b/corporate-news/posts/reinventing-embedded-memory-how-raaam-is-solving-the-sram-scaling-wall</guid><slash:comments>0</slash:comments><description>As AI, automotive, and data centers continue to scale exponentially, one part of the chip has quietly become a bottleneck: embedded memory. Modern designs now dedicate more than half of their silicon area to SRAM, yet SRAM no longer scales with Moore&amp;#39;s law in advanced CMOS nodes. The result? Larger chips, higher power, and rising costs. RAAAM is a deep-tech startup spun out of Bar-Ilan University through the Cadence University Incubator Program. They’ve developed a completely new embedded memory architecture called GCRAM. This three-transistor cell delivers up to 50% area reduction and up to 10X lower power compared to a traditional six-transistor SRAM. What’s more, the GCRAM bit-cell utilizes decoupled write and read ports, providing native two-ported operation at no additional cost, offering a substantial increase in memory bandwidth. Reducing the memory footprint results in significant fabrication cost savings through die size reduction. GCRAM is a custom memory solution that RAAAM tailors to specific foundries and process nodes. They’re working with leading foundries and customers to bring the technology into next-generation products. GCRAM has been validated on the silicon of leading semiconductor foundries in process nodes ranging from 16nm to 180nm and was successfully evaluated in 5nm FinFET technology. 
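The headline numbers above invite a quick back-of-the-envelope check (a sketch with illustrative inputs, not RAAAM's figures for any specific chip): if embedded SRAM occupies half the die and GCRAM cuts that region's area by up to 50%, the whole die can shrink by up to about 25%.

```python
# Back-of-the-envelope die-area saving when the memory region shrinks.
# Both inputs are assumptions for illustration, not measured data.
def die_area_saving(sram_fraction, area_reduction):
    """Fraction of total die area saved when only the SRAM region shrinks."""
    return sram_fraction * area_reduction

# Half the die in SRAM, memory region reduced by up to 50%:
saving = die_area_saving(0.5, 0.5)  # 0.25, i.e., up to a 25% smaller die
```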
Behind GCRAM’s performance is a rigorous design and verification pipeline powered by Cadence analog and mixed-signal technologies. RAAAM relies on the Virtuoso Studio environment to handle the entire design flow, from schematics to custom layout to simulation. The Spectre Simulation Platform is used extensively across multiple levels of complexity, with Spectre X Simulator for high-accuracy simulations, Spectre FX FastSPICE for large fully extracted GCRAM blocks, and Spectre FMC Analysis for fast statistical variation and yield analysis. To guarantee robust power delivery, RAAAM uses the Voltus-XFi Custom Power Integrity Solution for EM-IR signoff. Its tight integration with Virtuoso Studio, along with features like cross-probing and cross-highlighting, helps quickly pinpoint critical devices and potential reliability issues. The result is a faster, more intuitive debug process and high confidence in the final design. In an era where memory scaling and energy efficiency are more critical than ever, GCRAM offers a path forward that traditional SRAM just can’t match. Learn more about how RAAAM is reinventing on-chip memory with GCRAM using Cadence solutions . “Designed with Cadence” is a series of videos that showcases creative products and technologies that are accelerating industry innovation using Cadence tools and solutions. 
For more Designed with Cadence videos, check out the Cadence website and YouTube channel .</description></item><item><title>Cadence at DesignCon 2026: AI-Driven Design from Booth to Best Paper</title><link>https://community.cadence.com/cadence_blogs_8/b/corporate-news/posts/cadence-at-designcon-2026-ai-driven-design-from-booth-to-best-paper</link><pubDate>Thu, 12 Mar 2026 14:30:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364023</guid><dc:creator>Veena Parthan</dc:creator><guid>/cadence_blogs_8/b/corporate-news/posts/cadence-at-designcon-2026-ai-driven-design-from-booth-to-best-paper</guid><slash:comments>0</slash:comments><description>When the industry&amp;#39;s toughest engineering questions meet their sharpest minds, you know you are at DesignCon! From February 24–26 at DesignCon in Santa Clara, the conversation centered on a clear reality: AI is redefining the limits of bandwidth, power delivery, thermal management, and system complexity. Engineers, researchers, and technology leaders gathered to discuss trends and confront the practical challenges shaping next-generation electronic design. Cadence stood out with a comprehensive presence across panel discussions, sponsored sessions, technical paper presentations, interactive booth demonstrations, and a networking event. From advancing agentic AI in electronic design to introducing forward-thinking methodologies in power integrity and system-level analysis, Cadence showcased solutions grounded in technical rigor and engineered for real-world impact. Panel Discussions: Where Vision Met Reality DesignCon&amp;#39;s panel discussions are where bold ideas are stress-tested, and this year, Cadence was right at the heart of two of the event&amp;#39;s most talked-about discussions. Agentic AI for Electronic Design Moderated by Charles Alpert , AI Fellow at Cadence, alongside Chris Cheng of HPE, this panel emerged as one of the most forward-looking conversations at DesignCon. 
Industry leaders examined how AI agents are moving beyond automation toward reasoning-driven engineering workflows. From domain knowledge ingestion to Python application programming interface (API) code generation and autonomous task execution, the discussion balanced enthusiasm with pragmatism. Panelists offered a candid assessment of adoption barriers, integration complexities, and the distinctions between agentic workflows and conventional design methodologies. The discussion highlighted a shared view: agentic AI is not positioned to replace engineers, but to serve as a productivity multiplier—one that requires deliberate strategy, governance, and thoughtful implementation to realize its full potential. Powering the Future: AI in Power Integrity If the first panel looked toward transformation, the Power Integrity discussion grounded the conversation in engineering realities. Industry experts, along with Yun Chase , Solutions Architect and Product Engineer from Cadence, tackled the complexities of modern power integrity (PI) design—from multi-layer PCB stackups to 32+ phase vertical power delivery. Rather than offering generic AI optimism, the panel focused on practical use cases such as decap optimization, voltage regulator module (VRM) placement, and feasibility analysis. Data scarcity, layout-driven constraints, and long simulation cycles were also discussed. The result was a refreshingly candid conversation about how AI can augment power integrity workflows. Sponsored Sessions: Tackling Real-World Complexity Cadence-sponsored sessions drew packed audiences eager to explore how simulation, AI, and advanced design methodologies are reshaping the industry. Simulation-to-Measurement Correlation in the AI Era Alfred (Al) Neves, Founder and CTO of Wild River Technology, delivered a compelling presentation on achieving strong simulation-to-measurement correlation in AI-driven signal integrity environments. 
With nearly four decades of experience, Alfred shared practical techniques spanning both time- and frequency-domain analysis. His emphasis on measurement-based modeling up to 110GHz resonated strongly with engineers navigating the realities of ultra-high-speed design validation. System-Level Packaging for Next-Gen Silicon Mark Gerber, Product Management Group Director, IC Packaging, and Brad Griffin, Product Management Group Director, System Design &amp;amp; Analysis Group, at Cadence, addressed one of the most pressing challenges in modern electronics: the complexity of advanced IC packaging. Their session demonstrated how Allegro X Advanced Package Designer (APD) and AI-driven routing solutions streamline system-level design, reduce layout cycle times, and enable efficient 2.5D/3D integration. As packaging evolves toward heterogeneous integration and co-packaged optics, their insights highlighted the importance of integrated analysis within the design flow. JESD204C Compliance in Practice Garrett Warren of Mercury Systems delivered a detailed case study on achieving JESD204C physical-layer compliance using Cadence Sigrity X System SI technology. By walking attendees through channel modeling, jitter injection, COM evaluation, and automated compliance checks, he demonstrated how simulation-driven workflows enable predictable signoff, even at 32 Gbps lane rates. LPDDR6 for AI Data Centers Frank Ferro, Group Marketing Director at Cadence, provided a forward-looking view of LPDDR6 and its role in powering AI infrastructure. Building on LPDDR5X advancements, LPDDR6 introduces higher bandwidth, improved power efficiency, and critical reliability, availability, and serviceability (RAS) features tailored for data center deployments. 
As large language model (LLM) training workloads intensify, his session emphasized the importance of memory innovation to sustain system performance and enable next-generation AI infrastructure. Optimizing AI Interconnects to 448Gb/s Raul Stavoli, Senior Principal Signal and Power Integrity (SI/PI) Engineer at Rosenberger North America, presented scalable simulation workflows for optimizing AI interconnects approaching 448Gb/s. By integrating 3D EM modeling, machine-learning-based solvers, and reusable &amp;quot;simulation tooling,&amp;quot; his session demonstrated how engineering teams can efficiently explore complex solution spaces while ensuring manufacturability. Technical Paper Presentations: Innovation on Display DesignCon&amp;#39;s technical paper presentations are where deep research meets industry application, and Cadence contributions stood out. AI-Driven Thermal-Aware Data Center Capacity Planning Yixing Li, Senior Principal Software Engineer at Cadence, presented a breakthrough framework for thermal-aware capacity planning in AI-driven data centers. Li&amp;#39;s AI-driven framework predicts temperature distributions in milliseconds, achieving up to 10,000X speedup compared to high-fidelity computational fluid dynamics (CFD) simulations. Best Paper Finalist: Bridging the Time-Frequency Chasm in PDN Design One of the highlights of Cadence&amp;#39;s presence at DesignCon 2026 was the recognition of the paper &amp;quot;Bridging the Time-Frequency Chasm in PDN Design: Leveraging Cumulative Power-rail Noise and Reverse Pulse Techniques for Spatial-Frequency Insight&amp;quot; as a Best Paper Finalist. Presented by John Phillips and Kristoffer Skytte from Cadence alongside their industry collaborators Istvan Novak, Ethan Koether (Amazon), and Shirin Farrahi (Marvell), the presentation introduced a novel methodology combining Cumulative Power-rail Noise (CPN) and the Reverse Pulse Technique (RPT). 
Booth Demonstrations: Engineering in Action Beyond sessions and panel discussions, the Cadence booth served as a hands-on innovation hub. Live demonstrations illustrated how simulation and analysis technologies connect across chips, packages, and boards, bringing system-level design intelligence to increasingly complex architectures. Cadence Networking Event: No Illusions. Just Experts Building on the momentum from the technical sessions, the Cadence networking event, themed &amp;quot;No Illusions. Just Experts,&amp;quot; allowed industry peers to engage in technical conversations over food and drinks, featured curated giveaways, and even included a magician to draw attendees to the booth—creatively reinforcing that while the entertainment was magical, the engineering expertise was grounded in real-world results. A Cohesive Message: AI + Physics + Real-World Engineering Across every forum, booth, sponsored session, panel, and paper presentation, a consistent theme emerged: AI is most powerful when grounded in physics and real-world engineering constraints. Whether it was simulation-to-measurement correlation at 110GHz, agentic design flows, AI-enabled thermal planning, or advanced PDN methodologies, Cadence demonstrated a holistic approach. Rather than presenting isolated tools, the message centered on scalable workflows, cross-domain integration, and practical enablement for engineering teams to build next-generation systems. DesignCon 2026: From Insight to Impact DesignCon 2026 reinforced the accelerating convergence of AI, signal integrity, power integrity, thermal analysis, and system-level design. Cadence&amp;#39;s contributions, from moderating transformative panel discussions to earning the Best Paper Finalist recognition, reaffirm its position as a leader in this evolving domain. 
As bandwidth pushes toward 448Gb/s and AI workloads reshape infrastructure demands, one thing is clear: the future of electronic design will be defined not just by speed, but by intelligent, integrated workflows that combine simulation, AI, and deep domain expertise. And at DesignCon 2026, Cadence stood firmly at the center of that future! Ready to move beyond AI as a concept and into measurable impact? Learn how Cadence AI solutions are transforming design workflows from silicon to system.</description></item><item><title>Every Step a Story: What Trekking Taught Me About Short Steps!</title><link>https://community.cadence.com/cadence_blogs_8/b/di/posts/every-step-a-story-what-trekking-taught-me-about-short-steps</link><pubDate>Thu, 12 Mar 2026 05:06:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364033</guid><dc:creator>P Saisrinivas</dc:creator><guid>/cadence_blogs_8/b/di/posts/every-step-a-story-what-trekking-taught-me-about-short-steps</guid><slash:comments>0</slash:comments><description>I recently went on a trek to the largest monolithic hill in Asia, and it was my first ever trek in my life. When I started the trek, I was filled with excitement and confidence. The fresh air, beautiful mountains, and natural sounds energized me. I felt prepared both mentally and physically. Like many, I expected the journey to be challenging but manageable. However, seeing the large hill from below frightened me, but I chose to stay positive and moved slowly, carefully taking each step. There was a moment when I was unable to move forward due to the steep slope. I felt like giving up, but in the end, I discovered a stronger version of myself by taking tiny steps forward. The best part of trekking isn’t just the summit photo, but the journey—the small wins, surprises, climbs, and quiet moments of realization. 
After finishing the trip successfully, I returned with sore legs, a full gallery, and one surprising takeaway: &amp;quot;In trekking, careful steps aren’t enough; even tiny steps matter for success.&amp;quot; Why can&amp;#39;t we apply the same technique when learning critical concepts? Can we learn them through short videos? Yes: the YouTube Shorts below will help you learn concepts in just seconds or minutes. In this 1-minute concept video, you will learn about the timing relationship between the data path and clock path in relation to setup slack, along with the mathematical expression for setup slack used in Static Timing Analysis (STA). In this 1-minute concept video, we demonstrate what hold slack is and the mathematical expression for hold slack, including defining the data and clock paths between register-to-register. In this 1-minute concept video, we explain what clock skew is, why it occurs, the different types (positive skew, negative skew, and zero skew), and their impact on setup and hold timing. In this 1-minute video, we explain timing derates and the significance of the late and early timing derates used in static timing analysis with OCV, and how STA tools utilize them to reflect real-world process and voltage variations. In this 1-minute video, we describe Common Path Pessimism Removal (CPPR) in Static Timing Analysis: an overview of its purpose and how it affects slack, illustrated with a register-to-register timing path. In this 1-minute video, we explain clock uncertainty in static timing analysis, including the factors used to calculate it, such as clock jitter, clock skew, and additional margin. We also demonstrate clock uncertainty constraints in an SDC file with an example. You can find many more such YouTube Shorts and videos on various topics under Customer Education – Shorts. 
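The setup and hold slack expressions covered in those Shorts can be sketched in a few lines. This is a simplified single-cycle model; all names and numbers are illustrative, not STA tool output:

```python
# Minimal sketch of setup/hold slack for a register-to-register path.
# Delays are in ns; values below are hypothetical examples.
def setup_slack(clock_period, capture_clock_delay, setup_time,
                launch_clock_delay, data_path_delay):
    """Setup slack = required time minus arrival time at the capture flop."""
    required = clock_period + capture_clock_delay - setup_time
    arrival = launch_clock_delay + data_path_delay
    return required - arrival

def hold_slack(launch_clock_delay, data_path_delay,
               capture_clock_delay, hold_time):
    """Hold slack = arrival time minus required time for the hold check."""
    arrival = launch_clock_delay + data_path_delay
    required = capture_clock_delay + hold_time
    return arrival - required

# 10 ns clock, balanced 1 ns clock paths, 1 ns data path, 0.2 ns setup:
s = setup_slack(10.0, 1.0, 0.2, 1.0, 1.0)  # 8.8 ns, timing met
```

A positive result means the check is met; a negative result is a violation that optimization must recover.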
With these YouTube Shorts, learning becomes effortless and accessible. Whether you’re traveling by bus, train, or cab, or just waiting in line, you can learn in a few seconds. No logins, no setup, no barriers. Just open YouTube, watch a Short, and gain knowledge instantly. &amp;quot;Cadence brings learning into everyday moments: quick, simple, and on the go.&amp;quot; You can subscribe to this channel for more information and daily video and Shorts updates: Cadence Design Systems YouTube Channel Want to Learn More? The Cadence RTL-to-GDSII Flow training is available in both &amp;quot;Blended&amp;quot; and &amp;quot;Live&amp;quot; formats, as well as free online training. Please reach out to Cadence Training for further information, and watch this video. Get ready for the most thrilling experience with Accelerated Learning! And don&amp;#39;t forget to obtain your Digital Badge after completing the training! If you would like to stay up-to-date with the latest news and information about Cadence trainings and webinars, subscribe to the Cadence Training emails. Related Blogs Mini Clips, Mini Videos, Mega Crave: Why Short Reels Dominate Screens AI’m Always With You While Working RTL-to-GDSII Backend Webinar: Couldn’t Make It? We Saved You a Front Row Seat Training Insights – Why Is RTL Translated into a Gate-Level Netlist? Did You Miss the RTL-to-GDSII Webinar? No Worries, the Recording Is Available! 
Clock Tree Synthesis (CTS): The Backbone of Physical Design</description></item><item><title>Powering the AI Supercycle: Design for AI and AI for Design</title><link>https://community.cadence.com/cadence_blogs_8/b/corporate-news/posts/powering-the-ai-supercycle-design-for-ai-and-ai-for-design</link><pubDate>Wed, 11 Mar 2026 14:30:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364031</guid><dc:creator>Reela Samuel</dc:creator><guid>/cadence_blogs_8/b/corporate-news/posts/powering-the-ai-supercycle-design-for-ai-and-ai-for-design</guid><slash:comments>0</slash:comments><description>At the IEEE International Solid-State Circuits Conference (ISSCC) 2026 in San Francisco, Anirudh Devgan, President and CEO of Cadence , outlined a defining shift for the semiconductor industry: artificial intelligence is transforming not only the systems we build, but the way we engineer them. The industry has entered an AI supercycle defined by unprecedented scale and complexity. Demand for intelligence is driving exponential growth in compute, pushing the limits of performance, power efficiency, and system integration. Industry forecasts project the semiconductor market to approach $1.2 trillion by 2030, and current momentum suggests the industry could reach the $1 trillion milestone as early as this year. But the most important shift is not just what we are building, it is how we design it. As highlighted at ISSCC, the industry’s next scaling challenge is no longer transistor density alone; it is engineering productivity. The Engineering Bottleneck of the AI Era While much of the attention has been focused on the compute requirements of training new frontier models, the growing importance of inference will drive new requirements across semiconductors and the AI infrastructure. 
These include distributed data centers located closer to population centers to reduce latency, increasing the capability and density of edge compute within energy budgets, and the ever-growing need for high-bandwidth, low-latency connectivity. These requirements are driving modern systems on chip (SoCs) to integrate hundreds of billions of transistors, advanced packaging, high-bandwidth memory, and heterogeneous compute architectures. Performance is now determined not just at the die level, but across packages, boards, racks, and full systems. The design space has become too large, too interconnected, and too dynamic for conventional automation alone. Development cycles are compressing from several years to one year or less, even as system complexity accelerates, increasing the cost of late-stage changes and missed power, performance, or area targets. Meeting the demands of the AI era requires a fundamentally new approach to engineering. Anirudh’s plenary highlighted the potential for agentic AI to address these challenges. From Automation to Agentic Engineering For decades, electronic design automation (EDA) has delivered productivity gains through deterministic algorithms and point optimizations. But modern chip development is inherently iterative. Engineers explore architectures, refine constraints, evaluate tradeoffs, and repeat this process across multiple domains. Agentic AI moves engineering from isolated optimization to intelligent exploration. Instead of operating as standalone tools, AI-powered agents collaborate with engineers to understand design intent, learn from prior runs, and guide decisions toward optimal outcomes. By modeling relationships within high-dimensional design spaces, agentic systems help teams converge faster: reducing rework, shortening schedules, and enabling better architectural choices earlier in the cycle. 
Early use cases include specification-to-RTL generation, automated verification planning, and AI-driven optimization of power, performance, and area (PPA). This is not incremental automation. It represents a shift from tool-driven workflows to AI-assisted engineering collaboration. In many ways, this change marks the beginning of engineering intelligence at scale. Scaling with Multi-Agent Systems Modern development spans architecture, implementation, verification, physical optimization, and signoff—each with domain-specific constraints. The next evolution is a coordinated ecosystem of specialized agents working together across the design lifecycle. In this model, engineers define intent and constraints, while multiple domain-aware agents run, iterate, and continuously refine the design state. This multi-agent approach mirrors how engineering teams already work, now accelerated and scaled by AI that enables exploration with greater speed and consistency. The Engineering Mental Model Foundation models alone cannot produce engineering-grade silicon. Semiconductor development requires deep structural understanding, strict constraints, and deterministic behavior. A key enabler for engineering-grade agentic AI is a structured engineering mental model—a machine-readable representation of design intent that captures hierarchy, interfaces, protocols, connectivity, and system constraints. By grounding AI generation in this structured knowledge—and combining it with semiconductor-specific data—agentic systems move beyond code generation to true design reasoning. The mental model enables context-aware RTL and verification creation, consistency across the design hierarchy, and traceability from specification to implementation. This combination of foundation models, domain expertise, and structured design intelligence is emerging as a critical architectural differentiator for scalable agentic EDA. 
Early deployments are already demonstrating measurable gains in engineering productivity. By learning from prior designs and organizational data, agentic workflows help teams navigate complex design spaces more efficiently, accelerate convergence, and reduce dependence on manual iteration. As system complexity increases and time-to-market pressures intensify, faster convergence becomes a strategic advantage. Organizations that can evaluate more architectural options earlier, and reach optimal solutions with greater confidence, will deliver more differentiated silicon. This engineering transformation is unfolding alongside the rapid expansion of AI infrastructure. Beyond silicon design, AI is also reshaping the infrastructure that powers it. At the system level, digital twins and AI-driven analysis are improving the efficiency and utilization of large-scale AI deployments, with similar model-driven approaches expected to extend into domains such as robotics, autonomous systems, and other forms of physical AI. But the most immediate competitive differentiation will come from transforming how silicon itself is designed. Designing the Future, Faster The semiconductor industry has always advanced through higher levels of abstraction, smarter automation, and deeper integration. Agentic AI represents the next step in that evolution—transforming design from a sequential, tool-centric process into an intelligent, collaborative workflow. In the AI era, competitive advantage will be defined not only by what companies build, but by how intelligently they design it. The companies that embrace agentic engineering—grounded in domain expertise, structured design intelligence, and human–AI collaboration—will move faster, explore more, and deliver better systems. In a market accelerating toward the trillion-dollar milestone, the winners will not simply be those who build the most silicon—but those who can design intelligence into it the fastest. 
Explore how Cadence is powering the AI Supercycle across the semiconductor design lifecycle at Cadence.com .</description></item><item><title>Unlocking PPA with Innovus: What’s New and How to Unleash it</title><link>https://community.cadence.com/cadence_blogs_8/b/di/posts/unlocking-ppa-with-innovus-what-s-new-and-how-to-unleash-it</link><pubDate>Wed, 11 Mar 2026 04:12:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364029</guid><dc:creator>Vinod Khera</dc:creator><guid>/cadence_blogs_8/b/di/posts/unlocking-ppa-with-innovus-what-s-new-and-how-to-unleash-it</guid><slash:comments>0</slash:comments><description>Design teams building low-power silicon face nonstop PPA pressure: reduce dynamic and leakage power, hold or shrink area, and still meet timing on irregular floorplans. The latest Cadence Innovus Implementation System release turns that pressure into predictable wins with upgrades across placement (GigaPlace), optimization (GigaOpt), clocking (CCOpt), routing/closure (New PRO), and AI assistance (Innovus+ AI). At Cadence DSG Tech Day , Manoj Rai from Cadence showcased what has changed, why it matters, and how to apply it, so readers searching for &amp;quot;Innovus PPA improvements&amp;quot; or &amp;quot;what’s new in Innovus&amp;quot; can quickly find clear answers that map to real flow switches. This blog is an excerpt from that session. Placement that Starts with Power and Timing Realities (GigaPlace) The latest Innovus Implementation System release significantly enhances GigaPlace with multiple PPA-driven capabilities. Startpoint TNS Method . Beyond endpoint-only costing, GigaPlace also accounts for critical launch flops with worse Q-slack than D-slack by adding start point slack to the cost. Total Cost = ∑endpoint_WNS + ∑startpoint_WNS . This reweights timing force for unbalanced flops so launch–capture pairs converge earlier; be aware that a stronger timing force can move instances and shift the local WL distribution. 
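The startpoint-aware cost described above can be sketched as a toy model. The slack values here are hypothetical; in a real flow they would come from the timing database, and only negative (violating) slacks contribute to the cost:

```python
# Sketch of a timing cost that sums worst negative slack over both
# endpoints and startpoints, per Total Cost = sum(endpoint_WNS) +
# sum(startpoint_WNS). Illustrative only, not the GigaPlace engine.
def timing_cost(endpoint_slacks, startpoint_slacks):
    """Accumulate negative slack from both path ends; met paths add zero."""
    cost = 0.0
    for slack in list(endpoint_slacks) + list(startpoint_slacks):
        cost += min(slack, 0.0)
    return cost

# Two violating endpoints, one met endpoint, and one launch flop whose
# Q-slack is worse than its D-slack:
cost = timing_cost([-0.12, -0.05, 0.30], [-0.20])  # -0.37
```

Adding the startpoint term raises the cost of unbalanced launch flops, so the placer applies more timing force to them earlier.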
Complementing this, the Unbalanced Path-Based SKP evaluates criticality on both sides of each flop and applies proportional timing weights across the entire critical path. It improves WNS/TNS through GlobalPlace/GlobalOpt. GigaPlace also introduces Advanced Pipeline Placement, which automatically collects pure F/F pipelines and balances stage spacing and point-to-point wirelengths. It eliminates skewed pipeline structures and produces smoother point-to-point wirelengths, yielding better data path symmetry and higher achievable frequencies. A major addition is Integrated Congestion-Driven Placement (ICDP), replacing earlier padding-based approaches. Padding mainly helps local traffic; ICDP relocates long-net sources/sinks out of hotspots, so through traffic over macros/blockages clears more reliably than with padding alone. Rounding out the improvements, Switching Power Placement (SPP) integrates activity-weighted wirelength directly into the placement cost function. It reduces the wirelengths of high-toggle nets, which helps lower overall switching power. This method is especially effective for designs with highly skewed activity profiles. Together, these upgrades make GigaPlace far more timing-sensitive, congestion-aware, and power-intelligent, enabling substantial PPA gains across modern high-density designs. Optimization that Moves Instances, Not Just Numbers (GigaOpt) To reach the desired PPA, the Innovus Implementation System offers many options, including mega options that bundle new path compaction and more, as detailed below. Mega options offer a simple, clear way to create a flow recipe for much better PPA. Timing, power, and area effort can be tuned at both the LUI and CUI levels. 
Use the explicit knobs to set effort and ROI expectations. Optimization effort: LUI: setOptMode -opt_timing_effort; CUI: set_db opt_timing_effort standard/high. Power effort: LUI: setOptMode -opt_power_effort; CUI: set_db opt_power_effort none/low/high/ultrahigh. Area effort: LUI: setOptMode -opt_area_effort; CUI: set_db opt_area_effort standard/high. New path compaction (local placement refinement) involves tuning the CPR solver by assigning weights to instances with a higher probability of movement, while minimizing side-path impact. Integrated instance movement improves combinational and sequential path balance and helps achieve better placement for critical timing paths. The improved path compaction performs: better critical path exploration and working set creation; weight-based prioritization of instances in the working set to guide CPR on movement; and an improved core CPR engine with better cost computation for local refinement. Instead of end-stage skewing, the Innovus Implementation System applies pervasive global skew throughout the flow to maximize useful skew, reduce upfront power, and create margin for power reclaim while minimizing timing churn. The new hold optimizer improves hold TNS, area, and power, and provides an extra performance boost, automatically improving QoR. Its benefit is clear from the designs considered in the session. For power optimization, XOR-tree gating disables clocks when data is stable, and data gating ANDs the D pin with the ICG enable on high-activity flops. Expect post-CTS power reclaim to require timing recovery (GlobalOpt) to retain power gains. Clocking: Earlier Intelligence, Consistent Behavior, Lower Power (CCOpt) Clock Concurrent Optimization (CCOpt) is a technology integrated into the Innovus Implementation System. Early Clock Flow (ECF) V2: ECFV2 improves upon the original ECF by addressing issues such as de-cloning after placement and a lack of timing awareness.
It moves de-cloning before placement and adds physical awareness to better guide incremental placement. Additionally, ECFV2 enhances clock clustering and de-clustering for improved clock correlation. It also enhances activity-driven clock tree synthesis (CTS) by reducing wire length on high-activity nets and optimizing clock network resizing through iterative power-driven resizing. These advancements result in more efficient timing, power management, and overall design performance. Activity-Driven CTS V2 and Clock-Gate Push-Up Activity-driven CTS V2 plus clock-gate push-up cut activity-weighted capacitance on the hottest parts of the clock tree by pushing the ICG upstream (logically and physically where allowed). This makes the highly toggling segment shorter and driven through fewer and lower-capacitance elements; even if a quieter branch becomes longer, the overall activity-weighted wire length decreases, and switching power improves. Activity-driven CTS V2 then resizes under an activity-weighted cost, accepting small total wirelength increases when they lower power in high-activity regions. Together, they deliver consistent clock-power reductions with minimal side effects. New H-Tree Synthesis Features: More Cell Choices, Less Power The latest Innovus Implementation System release introduces significant improvements to H-tree synthesis, focused on reducing power consumption and improving insertion delay in clock distribution networks. The new H-Tree features now support multiple trunk cells from the same cell family in the trunk section of the tree. These cells can have lower or equal drive strength compared to the original reference H-Tree, allowing the tool to create a more power-efficient trunk while maintaining timing integrity. Similarly, the Innovus Implementation System now supports multiple options for final (leaf) cells, allowing designers to substitute the traditional buffer with an inverter when polarity constraints allow.
This flexibility can reduce one tree level and improve overall insertion delay without compromising correctness. Best practice guidance: use multiple trunk cells from the same family with a drive equal to or lower than the reference trunk, and allow an inverter at the leaf when polarity permits to remove one level. Example commands: create_flexible_tree -trunk_cell {INVX24 INVX16 INVX8} -final_cell BUFX24 create_flexible_tree -trunk_cell {INVX24 INVX16 INVX8} -final_cell {BUFX24 INVX24} Post-Route Optimization: Reimagined (New PRO) New PRO features overhaul closure after routing by fixing the root causes of late-stage instability and then introducing a staged, timing-accurate optimization ladder. The legacy flow suffered from weak pre-/post-route correlation; coarse congestion, SI, and topology estimation; a limited trackOpt transform set; and almost no router-optimizer interplay, causing PPA loss, timing “jumps,” and ecoRoute DRC churn that stretched convergence. The new PRO addresses this by moving from soft wires and global routes with dRoute-level accuracy (eDR) to hard wires (final detailed routes), letting the optimizer try bigger, smarter changes while seeing near-final parasitics. Concretely, it runs through four stages: Init reclaims inefficient buffer chains and poor layer assignments to relieve congestion and set a better topology. Soft uses eDR (detail-route-level) global routes with SI-based timing to apply large, non-legal transforms safely. Medium narrows to moderate transforms. Hard/ECO finishes with legal buffering/resizing only. This reduces post-route timing jumps and ecoRoute DRC churn, and improves pre/post correlation. Innovus+ AI: Engine-Level Intelligence and Accumulated Learning Innovus AI brings learning and search directly into the implementation engines to improve PPA and predictability.
GigaOpt-integrated AI (`-ai`) evaluates transform candidates in parallel across place/clock/route/signoff and selects higher-ROI moves with improved MT scalability versus baseline. Accumulated Learning (JedAI) carries forward cell selection and layer assignment experience to avoid unhelpful cells and discourage layers in problematic regions in subsequent runs, thereby improving PPA and TAT across iterations. For usability and data science workflows, the Innovus Implementation System embeds a Python UI so you can call Python AI libraries directly on the live database (and connect to JedAI) to mine instances/pins, prototype heuristics, and operationalize learned policies inside the flow. What It Means in Practice On recent customer designs (25.12), -ai runs demonstrated reduced density and total/dynamic power at the cost of longer exploratory TAT, which typically shrinks as recipes stabilize. Design 1 density decreased by 1.8%, dynamic power improved by 2.7%, total power declined by 5.8%, and TAT improved approximately 1.7 times. Similarly, Design 2 density decreased by 0.23%, dynamic power increased by 0.9%, total power decreased by 2.1%, and TAT improved about 1.4 times.
Learn More NTHU Makes a Human-Like Robot Arm with Stratus HLS and Innovus Implementation Innovus+ Synthesis and Implementation System GigaPlace Solver-Based Placement Technology In the Innovus Implementation System</description></item><item><title>Improved Efficiency: Longhorn Racing Transformed Analysis with Cadence Tools</title><link>https://community.cadence.com/cadence_blogs_8/b/can/posts/improved-efficiency-longhorn-racing-transformed-analysis-with-cadence-tools</link><pubDate>Tue, 10 Mar 2026 20:15:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364030</guid><dc:creator>Kira Jones</dc:creator><guid>/cadence_blogs_8/b/can/posts/improved-efficiency-longhorn-racing-transformed-analysis-with-cadence-tools</guid><slash:comments>0</slash:comments><description>Longhorn Racing, a cooperative student organization comprised of three Collegiate Design Series Teams that provide its members with the opportunity to explore different engineering fields and grow their tangible skills, is experiencing a significant shift in how they approach their engineering workflows. With professional-grade Cadence software tools, they are making enhancements to their analysis process. Let&amp;#39;s hear from the team about how they&amp;#39;ve incorporated Cadence into their flow. Enhanced Integration for Seamless Workflows One of the standout features of Cadence&amp;#39;s software is the deep level of integration offered across different workflows. Our team previously faced challenges when creating shell meshes for our tube frame due to the cumbersome interface of other industry tools. Many aspects, like shell types and variable mesh quality criteria, were hidden within a complex user interface. With Cadence, however, these finer details become more accessible, enabling our engineers to work more efficiently and intuitively. 
A Game Changer: The Connection Manager Another remarkable feature that has truly impressed our team is the Connection Manager in the ANSA preprocessor. Each time we introduce our structural engineers to this tool, we see their reactions of amazement at how straightforward defining and managing connections becomes. Particularly for multi-bolt patterns and structural welds, the specialized functionality provided by Cadence has proven to be indispensable. It&amp;#39;s a significant leap forward in simplifying what can often be a complicated process. Streamlined Post-Processing Interfaces Post-processing has often been a bottleneck in our analysis cycles, primarily because other software tools are tightly coupled with solvers and produce full result sets. Sharing results among team members has historically been time-consuming and bandwidth-intensive, but Cadence disrupts this outdated model, allowing us to share only relevant results in standalone and well-compressed formats. This change has led to a dramatic increase in our team&amp;#39;s innovation rate through streamlined analysis. Visible Improvements As we began to incorporate Cadence&amp;#39;s suite of tools throughout our design cycle this year, we quickly noticed improvements in analysis speed and team collaboration. Although we have only just scratched the surface of what Cadence offers, the benefits are already clear. As we move forward, the Longhorn Racing team is excited about the potential of further integrating Cadence&amp;#39;s workflows into our electronics, modeling, and aerodynamics teams. The enhancements we&amp;#39;ve seen so far have ignited our passion for innovation and excellence, pushing us to explore the full capabilities of Cadence&amp;#39;s tools. We hope to continue to support this team and these students in creating innovative solutions and gaining the practical experience they need to thrive in their future careers.
If you are part of a student design team interested in partnering with Cadence, reach out to academicinnovation@cadence.com !</description></item><item><title>Serial Wire Debug (SWD) Protocol: Efficient Debug Interface for Arm-Based System</title><link>https://community.cadence.com/cadence_blogs_8/b/fv/posts/serial-wire-debug-swd-protocol-efficient-debug-interface-for-arm-based-system</link><pubDate>Fri, 06 Mar 2026 12:00:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364022</guid><dc:creator>Divya Chawla</dc:creator><guid>/cadence_blogs_8/b/fv/posts/serial-wire-debug-swd-protocol-efficient-debug-interface-for-arm-based-system</guid><slash:comments>0</slash:comments><description>Modern embedded systems are becoming increasingly compact, power efficient, and feature rich. As SoCs integrate more functionality, developers need reliable debug access without increasing pin count or board complexity. Serial Wire Debug (SWD) addresses these needs by providing a streamlined alternative to JTAG, enabling high performance debug features using only two pins, making it ideal for today’s constrained IoT, consumer, and automotive designs. Overview of the SWD Protocol The Serial Wire Debug (SWD) protocol is a compact, two-pin debug interface designed for Arm processor-based systems. As an alternative to the traditional JTAG interface, SWD provides efficient access to debug and trace features while minimizing pin count—a critical requirement for resource-constrained embedded and mobile devices. SWD is widely adopted in Arm Cortex-M processor-based systems and is defined in the Arm Debug Interface Architecture Specification. The protocol enables debuggers to communicate with the Debug Access Port (DAP), facilitating operations such as memory access, register reads/writes, and system control through a minimal two-wire interface. 
SWD Protocol Architecture The SWD interface operates using only two signals: SWCLK (Serial Wire Clock): Clock signal driven by the debug host (master) SWDIO (Serial Wire Data I/O): Bidirectional data line shared between host and target This simplified interface significantly reduces the pin overhead compared to JTAG&amp;#39;s five-pin requirement, making SWD ideal for space-constrained designs. Performance Advantages SWD can achieve higher performance than traditional JTAG interfaces. The protocol uses the full clock cycle for data transfer (rising edge to rising edge), whereas JTAG drives data on the falling edge and samples on the rising edge. This enables SWD to operate at up to twice the frequency of JTAG in the same technology, providing faster debug access and reduced development time. Key Protocol Features 1. The SWD protocol uses a point-to-point master–slave architecture where the debug host fully controls communication with a single target device via DP and AP registers. 2. Each transaction follows a defined sequence of request, acknowledge, and data phases, with LSB-first transmission, explicit status responses (OK/WAIT/FAULT), and parity checking for reliability. Image Reference: Courtesy of ARM&amp;#174; Debug Interface Architecture Specification ADIv6.0 (Figure B4-1 SWD successful write operation) 3. A configurable turnaround period ensures safe bidirectional control of the SWDIO line, while defined state transitions and idle cycles manage operation modes, protocol switching, and low-power states. Protocol Error Handling The SWD protocol includes strong error handling through: 1. Even parity checks on both request and data phases to detect transmission errors. 2. Protocol errors such as invalid start, parity, stop, or park bits result in no target response, allowing the debugger to recognize the fault. 3.
A defined line reset sequence—holding SWDIO high for at least 50 clock cycles followed by idle cycles—ensures reliable recovery and resynchronization of the interface. Cadence SWD Verification IP Solution Cadence AMBA SWD Verification IP offers a full-featured solution for verifying SWD interfaces at both unit and system levels, supporting active, passive, and low-power (dormant) configurations. It provides exhaustive protocol compliance checking, including reset sequences, turnaround timing, parity, ACK responses, idle cycles, and state transitions across SWD, JTAG, and Dormant modes. Advanced capabilities such as timing configurability, glitch detection, error injection, and functional coverage are combined with easy UVM/SystemVerilog integration and flexible runtime control for efficient debug and verification. Cadence SWD VIP accelerates verification closure by providing: - Ready-to-use building block sequences for common debug operations - Automated protocol compliance checking - Extensive coverage models aligned with the ADIv6.0 specification - Support for back-to-back VIP configurations for standalone testing Whether verifying a custom SWD implementation, integrating debug infrastructure into a SoC, or validating system-level debug scenarios, Cadence SWD VIP provides the comprehensive toolset needed for efficient, thorough verification.
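As background, the request-phase parity rule described above can be modeled in a short sketch. The bit ordering follows the Start/APnDP/RnW/A[2:3]/Parity/Stop/Park request header from the ADI specification; the helper name and the register address are illustrative, and this is not the Cadence VIP API:

```python
# Minimal model of the 8-bit SWD request header and its even-parity rule.
# Bit order on SWDIO (LSB first): Start, APnDP, RnW, A[2], A[3], Parity,
# Stop, Park. Helper name and register address are illustrative only.

def swd_request_bits(ap_ndp, rnw, addr):
    """Return the 8 request bits in wire order."""
    a2 = (addr >> 2) % 2
    a3 = (addr >> 3) % 2
    # Even parity over the four payload bits (APnDP, RnW, A[2], A[3]).
    parity = (ap_ndp + rnw + a2 + a3) % 2
    return [1, ap_ndp, rnw, a2, a3, parity, 0, 1]  # Start ... Stop, Park

# Example: a DP read from register address 0x4.
bits = swd_request_bits(ap_ndp=0, rnw=1, addr=0x4)
# Payload ones plus the parity bit always total an even count.
```

A debugger that corrupts any payload bit breaks this even-count property, which is exactly the condition the target detects and answers with no response, as described in the error-handling section above.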
For any additional clarification or technical assistance, users are encouraged to reach out through talk_to_vip_expert@cadence.com. For more information, please visit https://www.cadence.com/en_US/home/tools/system-design-and-verification/verification-ip/simulation-vip/amba.html#amba-swd</description></item><item><title>Transforming the Automotive Experience with Cadence Tensilica DSPs</title><link>https://community.cadence.com/cadence_blogs_8/b/ip/posts/transforming-the-automotive-experience-with-cadence-tensilica-dsps</link><pubDate>Fri, 06 Mar 2026 04:54:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364020</guid><dc:creator>SriramK</dc:creator><guid>/cadence_blogs_8/b/ip/posts/transforming-the-automotive-experience-with-cadence-tensilica-dsps</guid><slash:comments>0</slash:comments><description>Experience Innovation at Embedded World 2026 The automotive industry has shifted its focus from traditional performance metrics to prioritizing safety and comfort. As vehicles evolve into software-defined environments, the interior cabin is emerging as a sanctuary—offering enhanced safety, superior comfort, and immersive high-fidelity entertainment for all occupants. At this year&amp;#39;s embedded world, Cadence is proud to showcase how Tensilica DSPs are at the forefront of this transformation. In collaboration with leading ecosystem partners, Cadence will present a series of engaging demonstrations that highlight the latest in-cabin technology advancements. Featured Demonstrations The &amp;quot;quiet bubble&amp;quot; Active Noise Cancellation with Silentium: Road noise can significantly detract from a premium cabin experience. To address this, Cadence has partnered with Silentium to demonstrate their Quiet Bubble™ software running on Tensilica HiFi DSPs. Leveraging low-latency processing, this system actively cancels unwanted road and tire noise in real time, delivering a serene and whisper-quiet interior.
Passengers can enjoy clear conversations and a more focused driving experience, free from external distractions. In-cabin sensing and occupant monitoring on NXP RT700 platform: Tensilica DSPs are instrumental in enhancing the safety of drivers and passengers. This demonstration running on the NXP RT700 platform highlights advanced in-cabin sensing capabilities and workloads processed on our HiFi 1 DSP core, including driver distraction detection, real-time monitoring of vital signs, and child presence detection to ensure no child or pet is inadvertently left behind. The importance of these technologies has grown as the Euro NCAP 2026 protocols now demand higher safety standards. Achieving a 5-star safety rating requires manufacturers to implement not only alert systems but also direct-sensing and active-intervention solutions. Long-range radar chipset from NXP S32R47: Targeting L2+ to L4 automotive ADAS applications, Cadence Tensilica Floating Point DSPs and FFT accelerators provide just the right solution under the hood to process radar-dense point clouds, perform object detection, classify and separate tightly spaced objects, sense debris next to these objects, and detect vulnerable road users (VRUs), enabling safer highway and urban driving. Cadence DSPs also support multimodal sensing and lidar sensor processing. Whether your interests lie in active acoustics, AI-driven safety features, or the future of zonal vehicle architecture, Cadence experts will be available to provide in-depth explanations and demonstrations of the latest technology.
Where: Hall 4, Booth 219 When: March 10-12, 2026 For more information, visit Summary - embedded world 2026 and contact us to request a meeting .</description></item><item><title>Reinventing Optical Computing: Neurophos’ Breakthrough in Photonic Transistors</title><link>https://community.cadence.com/cadence_blogs_8/b/corporate-news/posts/reinventing-optical-computing-neurophos-breakthrough-in-photonic-transistors</link><pubDate>Thu, 05 Mar 2026 14:00:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364015</guid><dc:creator>Tanushri Shah</dc:creator><guid>/cadence_blogs_8/b/corporate-news/posts/reinventing-optical-computing-neurophos-breakthrough-in-photonic-transistors</guid><slash:comments>0</slash:comments><description>Optical computing has been around since the late 1980s. The idea of using light instead of electricity has always sounded promising, especially since photons are much more energy-efficient than electrons. When put to the test, though, optical transistors end up being too large to use on a chip. Enter Neurophos, a photonics startup that’s found a way to shrink the optical transistor down by 10,000X. Not only has the Neurophos team made optical transistors extremely small, they’ve also made them 100X higher in compute density and energy efficiency than today’s best GPUs. They’ve effectively compressed one entire rack into the size and power draw of a single GPU. Neurophos aims to deliver a commercially viable product by 2028. To achieve this breakthrough, the Neurophos team turned to Cadence to support its next-generation photonic chip design. By leveraging a suite of industry-leading tools, they were able to open doors previously closed to silicon photonics teams. With the help of Cadence’s Virtuoso Studio, Neurophos was able to design and verify complex photonic components, as well as handle the slotting and filling that’s needed. 
The Cadence EMX Planar 3D Solver is able to simulate large, multi-port layouts with high accuracy, capturing common-mode effects and intricate electromagnetic behaviors that traditional photonics struggle with. Neurophos also uses Cadence’s Spectre RF Option to simulate both the small-signal and large-signal responses of its devices. As with any high-performance IC, reliable power delivery is essential. Neurophos employs Cadence Quantus Extraction to extract, analyze, and optimize the power distribution network, ensuring first‑pass success even at aggressive performance targets. This level of insight is essential when pushing the boundaries of silicon photonics design, and together, Neurophos and Cadence are accelerating the future of computing. Learn more about how Neurophos is reinventing optical computing with the help of Cadence tools. “Designed with Cadence” is a series of videos that showcases creative products and technologies that are accelerating industry innovation using Cadence tools and solutions. For more Designed with Cadence videos, check out the Cadence website and YouTube channel .</description></item><item><title>Professionals in CFD: A Conversation with Margarita Campos</title><link>https://community.cadence.com/cadence_blogs_8/b/cfd/posts/professionals-in-cfd-a-conversation-with-margarita-campos</link><pubDate>Thu, 05 Mar 2026 07:56:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364018</guid><dc:creator>Veena Parthan</dc:creator><guid>/cadence_blogs_8/b/cfd/posts/professionals-in-cfd-a-conversation-with-margarita-campos</guid><slash:comments>0</slash:comments><description>What does it take to go from a mechanical engineering classroom in Buenos Aires to validating different turbomachinery models at Cadence? For Margarita Campos, a turbomachinery product engineer at Cadence, it started with curiosity—and a master’s thesis that changed everything. 
In this edition of Professionals in CFD, Cadence marketing writer Veena Parthan speaks with Margarita to learn about her dual degrees, rotating detonation engines, her love for her work, and the surprising non-technical career she might have chosen instead. Veena Parthan: Tell us something about yourself. Margarita Campos: I’m originally from Buenos Aires, Argentina, where I completed my early education before entering mechanical engineering school. Unlike many other countries, in Argentina, the program is a comprehensive five-year degree that integrates both bachelor’s and master’s level studies. During my fourth year, I had the opportunity to enroll in a double-degree program with the Politecnico di Torino in Italy. This allowed me to spend two additional years studying in Italy and graduating with dual master’s degrees in mechanical engineering—one from Argentina and one from Italy. Though the process extended my studies, it was an incredibly enriching experience that proved invaluable for both personal growth and professional opportunities. Living abroad not only broadened my global perspective but also facilitated my ability to pursue career opportunities within Europe. Veena Parthan: That sounds like a fascinating academic journey across continents. Before stepping into a full-time role, did you have the chance to gain any industry experience through internships? Margarita Campos: Yes, I completed a short internship with Ethos Energy in Italy, a company specializing in the life extension of turbines and turbomachinery equipment. This stint provided me with exposure to industry practices and enhanced my understanding of workflows within a professional setting. Most of my prior experience was academic, including assisting in university courses, so this internship introduced me to a more established industrial environment and workplace culture. Veena Parthan: It’s interesting how those early experiences help shape career direction. 
So how did you first get into CFD? Margarita Campos: Interestingly, computational fluid dynamics (CFD) wasn’t a big part of my coursework. My program was more general mechanical engineering, oriented toward structural analysis. My interest in CFD developed during my master’s thesis. I worked with a PhD student researching rotating detonation engines and how their outflow could be integrated into an axial turbine. My research examined the impact of inlet end-wall diffusion on axial turbine secondary flows and overall performance. As part of the study, I conducted CFD simulations to analyze secondary flow structures and vortices and their influence on turbine efficiency. It was my first experience performing detailed simulation work beyond theoretical concepts, and it quickly became clear that CFD was an area I wanted to pursue further. Veena Parthan: That thesis clearly played an important role in shaping your interest in simulation. Today, what is your role at Cadence, and what does your day-to-day work look like? Margarita Campos: I’m part of the turbomachinery product engineering team at Cadence. My work is primarily focused on validating the Fidelity Flow Solver. This involves running various cases, experimenting with different configurations, and assessing how changes affect results to validate new models as they are introduced. Additionally, I am involved in developing benchmarks and best practices, as well as offering second-line support for advanced customer concerns. I really enjoy that I get to work through the full workflow—meshing, setting up, and running the simulations, and analyzing results. It is a highly hands-on and exploratory role, which aligns perfectly with my interests. Veena Parthan: It sounds like a role that combines both technical depth and experimentation. Looking ahead, where do you see your career evolving in the future? Margarita Campos: At this point in my career, I feel deeply fulfilled by my work.
My genuine passion for turbomachinery and computational fluid dynamics drives me, and I plan to continue building my expertise and contribution in this field for the foreseeable future. However, I remain open to exploring new career directions as I advance professionally. Roles that involve a stronger focus on customer interaction or design-oriented projects could be exciting prospects down the line. Being at an early stage in my professional journey, my priority is to continue learning, growing, and navigating toward a career path that best aligns with my evolving skills, interests, and aspirations. Veena Parthan: It’s always fascinating to see how careers evolve with new experiences. On a lighter note, if you had to choose a completely non-technical career, what would it be? Margarita Campos: That’s a tough one! But probably law. Opposed to engineering, which is very logical and black-and-white, law is more nuanced and human. Of course, I’m very happy with engineering—but if I had to pick something totally different, that might be it. Veena Parthan: That’s quite an interesting contrast. Outside of engineering and simulations, what activities do you enjoy most in your free time? Margarita Campos: I am an avid reader and have been a member of a book club since high school. The group is quite unique, as most of the members are literature professors or translators, while I bring a different perspective as an engineer. This diversity allows me to engage with individuals from various professional and cultural backgrounds. We typically select a book to read collectively and then engage in in-depth discussions. While our opinions on the books we read may vary—some we adore, others not as much—the conversations are consistently thought-provoking. I also have a strong passion for traveling. 
Having lived in Argentina, Italy, and now Belgium, travel has become an integral part of my lifestyle, offering me opportunities to immerse myself in diverse cultures and broaden my worldview. Veena Parthan: Those experiences definitely bring a broader perspective beyond engineering. Finally, what advice would you give students or fresh graduates who want to enter the CFD industry? Margarita Campos: My advice would be to maintain a mindset of continuous learning. CFD is a highly complex and dynamic field, so having strong foundational knowledge is essential. Never hesitate to ask questions or seek guidance when needed — curiosity and a willingness to learn are vital. Since this field is always advancing, it provides endless opportunities for growth, making it both challenging and highly rewarding. For those pursuing a career in mechanical engineering, I would emphasize the value of building a strong support network. Surround yourself with individuals who genuinely support and believe in your capabilities. Even if the representation of women in the field may be limited, mentors and colleagues who encourage your growth can make all the difference in your professional journey. Closing Thoughts From Buenos Aires classrooms to European research labs and now solver validation at Cadence, Margarita’s journey shows how curiosity and openness can shape a meaningful career in CFD. Whether she’s analyzing turbomachinery flows or debating books at her monthly book club, she brings the same thoughtful energy to everything she does. And for her, that continuous learning mindset is what matters most. “I’m still learning every day,” she says. 
“But I really enjoy what I’m doing, and that’s the most important part.” Read our previous conversations with CFD professionals who are shaping the future of simulation and leadership in technical fields.</description></item><item><title>A Seamless Cadence Solution for RF and Microwave Designers</title><link>https://community.cadence.com/cadence_blogs_8/b/cadence-support/posts/a-seamless-cadence-solution-for-rf-and-microwave-designers-484784278</link><pubDate>Thu, 05 Mar 2026 03:30:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364017</guid><dc:creator>ErinGrant</dc:creator><guid>/cadence_blogs_8/b/cadence-support/posts/a-seamless-cadence-solution-for-rf-and-microwave-designers-484784278</guid><slash:comments>0</slash:comments><description>Next-generation wireless systems integrate heterogeneous technologies to meet demanding size, weight, power, and cost (SWaP-C) requirements. To meet these challenging SWaP-C specifications and ensure accurate simulation results for first-pass design success, engineers need to account for the parasitic behavior and signal degradation that occurs in designs operating at higher frequencies and with greater integration complexity. The Challenge With high-speed analog design, interconnects are treated as parasitics, elements that degrade performance, to be accounted for in post-layout simulation. Conversely, in modern RF design, interconnects are critical design elements that can be shaped, tuned, and optimized to enhance system performance. Where analog circuit design focuses mainly on simulating voltage and current waveforms, RF design is concerned with power waves and s-parameters to understand how energy is transmitted and reflected in the system. Because RF/microwave designs operate over a frequency band of interest, network simulations are carried out in the frequency domain rather than the time-domain simulations commonly used to analyze analog designs.
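The power-wave view above can be made concrete with a toy reflection calculation; the 75-ohm load, 50-ohm reference impedance, and helper names are hypothetical, and real designs obtain such quantities from EM and circuit simulation:

```python
# Toy frequency-domain calculation: the reflection coefficient
# Gamma = (ZL - Z0) / (ZL + Z0) and the corresponding return loss in dB
# for a mismatched load. Impedance values are hypothetical; real flows
# get S-parameters from EM/circuit simulation rather than closed forms.
import math

def reflection_coefficient(z_load, z0=50.0):
    return (z_load - z0) / (z_load + z0)

def return_loss_db(z_load, z0=50.0):
    return -20.0 * math.log10(abs(reflection_coefficient(z_load, z0)))

gamma = reflection_coefficient(75.0)  # 0.2: 20% of the incident wave reflects
rl = return_loss_db(75.0)            # roughly 14 dB return loss
```

A perfectly matched load gives Gamma of zero and infinite return loss, which is why matching interconnects over the whole band of interest, not just at one frequency, is central to RF design.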
Furthermore, RF performance metrics typically include network losses, reflections, and coupling over a wide bandwidth. The Cadence RF Solution The RF/microwave design requirements described above are addressed with specialized EDA tools supporting the unique design methodologies and analysis capabilities that differentiate RF from analog design. Developed from a common design framework, Cadence Microwave Office and Virtuoso Studio RF support the design of III-V compound semiconductor devices such as Gallium Arsenide (GaAs) or Gallium Nitride (GaN) MMICs, RF PCBs, RF silicon ICs, and advanced module design. Each platform supports integrated planar EM (AXIEM in Microwave Office for GaAs/GaN/PCB design and EMX in Virtuoso Studio RF for RF silicon), 3D FEM (Clarity), and thermal (Celsius Thermal Solver) analysis. Learn More Join us for this free technical Training Webinar with Sanjeev Kumar, PhD, for an overview of how Cadence&amp;#39;s EDA solutions enable RF, microwave, and millimeter-wave designers to effectively manage these rigorous requirements, from initial system modeling through to final manufacturing signoff. Date and Time: Wednesday, March 25 07:00 – 08:00 PDT San Jose/10:00 – 11:00 EDT New York/14:00 – 15:00 GMT London/15:00 – 16:00 CET Berlin/16:00 – 17:00 IST Jerusalem/19:30 – 20:30 IST Bengaluru (Bangalore)/22:00 – 23:00 CST Beijing REGISTER NOW To register for this webinar, sign in with your Cadence ASK* account (email ID and password), then select Enroll. You’ll receive a confirmation email with all login details. A quick reminder: If you haven’t received a registration confirmation within one hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com . To view our complete training offerings, visit the Cadence Training website. Want to Dive Deep Into the Topic? 
Enroll in our free online training course. Microwave Office for RF Designers Training Course | Cadence Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe. Hungry for Training? Choose the Cadence Training Menu that’s right for you. Explore our Accelerated Learning option for faster skill-building. Related Courses Planar EM Analysis in AWR Microwave Office Training Course | Cadence 3D EM Analysis with Clarity in Microwave Office Training Course | Cadence Clarity 3D Solver Training Course | Cadence Training Bytes Yield Analysis in Microwave Office (Video) Controlling the Mesh for AXIEM in AWR Microwave Office (Video) Blogs The Power of Clarity in Microwave Office: Transforming 3D EM Simulation - Analog/Custom Design - Cadence Blogs - Cadence Community Training Webinar - Microwave Office and EM Simulation - RF Engineering - Cadence Blogs - Cadence Community AWR Microwave Office to Allegro RF-PCB Design Flow - System, PCB, &amp;amp; Package Design - Cadence Blogs - Cadence Community *If you don’t have an ASK account, go to Cadence User Registration and complete the requested information.</description></item><item><title>Accelerating Chiplet Innovation with a New Partner Ecosystem</title><link>https://community.cadence.com/cadence_blogs_8/b/ip/posts/accelerating-chiplet-innovation-with-a-new-partner-ecosystem</link><pubDate>Wed, 04 Mar 2026 16:57:00 GMT</pubDate><guid isPermaLink="false">75bcbcf9-38a3-4e2e-b84b-26c8c46a9500:1364006</guid><dc:creator>Mick Posner</dc:creator><guid>/cadence_blogs_8/b/ip/posts/accelerating-chiplet-innovation-with-a-new-partner-ecosystem</guid><slash:comments>0</slash:comments><description>The semiconductor industry is currently undergoing a massive shift. As we push the boundaries of performance in physical AI, data centers, and high-performance computing (HPC), traditional monolithic chip design is hitting physical and economic walls. 
The answer for many engineers and architects is chiplets, a modular approach that enables the mixing and matching of silicon dies to create powerful, highly customized systems. However, transitioning from a single-die SoC (system on chip) to a multi-die SiP (system in package) brings a surge in engineering complexity. How do you ensure different pieces of silicon from different vendors communicate with each other correctly? To tackle these challenges head-on, Cadence has announced a major leap forward: a Chiplet Spec-to-Packaged Parts ecosystem. This initiative is designed to streamline the engineering process and accelerate time to market. Through our partnerships, Cadence is paving a lower-risk path for the next generation of chiplet adoption. The Spec-to-Packaged Parts Vision The core of this announcement is the Cadence Physical AI chiplet platform. This isn&amp;#39;t just a set of tools. It&amp;#39;s a comprehensive, configurable platform designed to bridge the gap between a chiplet specification and a final, known-good die (KGD) or packaged multi-die part. Cadence has built spec-driven automation that generates chiplet framework architectures. These frameworks combine Cadence&amp;#39;s own IP with third-party partner IP, all wrapped in critical chiplet management services, as well as built-in security and safety features. The goal is clear: Accelerate the spec-to-parts process while reducing risk! Developing chiplets often feels like venturing into uncharted territory. By providing a pre-verified platform, Cadence enables design teams to start with a robust foundation rather than building everything from scratch. In addition to significantly reducing customer-specific chiplet development time, this approach optimizes costs, provides the flexibility needed for customization, and enables the configurability that modern applications demand. Figure 1. Cadence Physical AI Chiplet Platform Critically, the generated chiplet architectures are standards-compliant. 
They adhere to the Arm Chiplet System Architecture and the future OCP Foundational Chiplet System Architecture, ensuring broad interoperability. The Cadence Chiplet Framework encapsulates these capabilities, which are reusable across chiplets, accelerating chiplet development and ensuring cross-chiplet interoperability through standardization. Figure 2. Cadence Chiplet Framework Strategic Partners: Arm and Samsung Two key collaborations anchor this new ecosystem, signaling industry-wide support for this initiative. Arm: Powering Physical AI Building on a long history of collaboration, Cadence and Arm have forged a new strategic partnership focused on physical AI. This agreement grants Cadence access to the advanced Arm Zena Compute Subsystem (CSS). This is a game-changer for edge AI processing requirements in automobiles, robotics, and drones. By integrating Arm&amp;#39;s technology, the platform empowers safer, smarter, and more efficient systems. Samsung Foundry: Future Prototype Silicon Proof One of the biggest hurdles in chiplet adoption is proving real-world functionality. To showcase Cadence&amp;#39;s chiplet expertise and Samsung&amp;#39;s semiconductor technology, Cadence is partnering with Samsung Foundry to build a real-world silicon prototype of the Physical AI Chiplet Platform. Using Samsung&amp;#39;s SF5A process for automotive, the prototype will feature an Arm Zena CSS-based chiplet, a central system chiplet, and an AI chiplet powered by Cadence Neo NPUs. A Robust IP Partner and Silicon Analytics Ecosystem An ecosystem is defined by the strength of its community. Beyond Samsung and Arm, Cadence has enlisted a diverse group of initial IP partners and a silicon analytics company to ensure important aspects of chiplet designs for Physical AI are covered. 
Arteris: Providing physically aware network-on-chip (NoC) IP products, such as Ncore for coherent systems and FlexGen for non-coherent ones, that deliver the high-bandwidth, low-latency, power-efficient interconnects multi-die systems require. eMemory: Contributing enhanced one-time programmable (OTP) memory that complements Cadence&amp;#39;s security subsystems, ensuring secure storage and key management. M31 Technology: Delivering MIPI PHY interface IP, essential for automotive and high-volume consumer applications requiring flexible camera and display integration. proteanTecs: Embedding a hardware health and performance monitoring system for telemetry and silicon analytics software, per chiplet die and across chiplet types, to enable power-efficient, safe, and reliable performance of next-gen systems. Silicon Creations: Providing ultra-fast, multiphase PLL clocking solutions optimized for the Cadence Chiplet Framework, UCIe die-to-die IP, and interface IP. Trilinear Technologies: Delivering advanced DisplayPort IP to drive high-performance video connectivity. Conclusion As David Glasco, vice president of the Compute Solutions Group at Cadence, noted in our recent announcement, this ecosystem represents a &amp;quot;significant milestone in chiplet enablement.&amp;quot; In an era of skyrocketing design complexity, achieving the necessary performance and cost efficiency demands collaboration and standardization. By combining extensive internal expertise with a powerful network of partners like Arm and Samsung Foundry and specialized IP and silicon analytics providers, Cadence is building a launchpad for the next generation of physical AI and HPC innovations. For engineers and architects, this means less time wrestling with integration headaches and more time focusing on differentiation and innovation. 
Resources Read the news release: Cadence Launches Partner Ecosystem to Accelerate Chiplet Time to Market Watch our webinar: Cadence Chiplet Solutions: Helping You Realize Your Chiplet Ambitions. Learn more: Cadence Chiplet Solutions</description></item></channel></rss>