2026/4/16
As the importance of cooling systems has increased significantly, chip-level cooling has also emerged as a critical technology. This article focuses on advancements in TIM 2 materials and the development of gold-plated cold plates, explaining how the introduction of indium improves thermal efficiency while also creating chemical reaction challenges. Amid this trend, Taiwanese companies—such as Superior Plating Technology (8431.TW)—are well-positioned to enter the supply chain and capture new opportunities driven by AI server cooling upgrades.
2026/4/8
The optical communications industry is shifting from traditional telecom cycles to structural growth driven by AI data center demand. As AI clusters expand and high-performance computing requirements increase, key technologies such as 800G and 1.6T optical transceivers, silicon photonics, and co-packaged optics (CPO) are accelerating adoption, driving upgrades across the entire industry chain. With capital expenditures continuing to rise, optical communications is becoming a core pillar of AI infrastructure, though supply bottlenecks and geopolitical risks remain key factors to watch.
2026/4/2
Driven by demand for AI, cloud computing, and high-performance computing, the U.S. semiconductor industry is entering a new growth cycle. This article provides a comprehensive breakdown of the global market size, industry value chain, and key segments—including IP/EDA, IC design, equipment, and foundries—while incorporating the 2026 outlook and potential risks to help investors understand core industry trends and investment logic.
2026/3/25
At GTC 2026, NVIDIA presented a comprehensive blueprint for its transformation from a GPU supplier into an AI factory platform provider. Centered around the Vera Rubin system, the company integrates CUDA-X, its ecosystem, and the AI Factory architecture, while highlighting the growing importance of token economics and agentic AI in the inference era. As demand rises across data processing, inference architectures, and enterprise AI deployment, NVIDIA is expanding further through heterogeneous computing, the open-model Nemotron family, and the Omniverse simulation platform. The company is also extending its reach into physical AI and regional AI markets, outlining the future direction of AI infrastructure and industry development.
2026/3/23
As the industry advances toward the 2nm process node, WMCM has emerged as a key focus in advanced packaging. This article explores WMCM’s core technology principles, process architecture, and its differences compared with InFO and CoWoS. It also analyzes its advantages and limitations in high-density RDL, thermal performance, cost, and integration flexibility, and highlights key Taiwanese beneficiaries in the materials and equipment supply chain.
2026/2/26
As Moore’s Law approaches its physical limits, the bottleneck in chip scaling is no longer confined to transistors—it increasingly lies in power delivery. Traditional front-side power delivery networks (FSPDN) ultimately constrain performance and power efficiency. To sustain progress in advanced process nodes, TSMC, Intel, and Samsung have each moved forward with backside power delivery network (BSPDN) technologies. This article analyzes the fundamentals of backside power delivery, introduces TSMC’s Super Power Rail (SPR), and compares it with Intel’s PowerVia and Samsung’s BSPDN solutions. It further explores two critical steps in implementing backside power delivery—wafer thinning and reclaimed wafers—and examines how these technological shifts could reshape the market and supply chain.
2026/2/5
As AI and HPC chip sizes continue to expand rapidly, advanced packaging is facing multiple challenges, including area utilization efficiency, warpage control, and cost structure. Although traditional CoWoS technology has established a critical position in the high-performance computing market, the physical limitations of circular wafers and ABF substrates are gradually becoming more apparent, making panel-level packaging an inevitable next step. This article explains the development background and technical characteristics of CoPoS packaging, while also comparing CoPoS with the earlier CoWoS and CoWoP technologies. By adopting square glass panels at the core of its architecture, CoPoS addresses the needs of larger and more highly integrated packages through higher unit throughput and improved structural stability, while also driving a new wave of growth momentum across equipment makers and the broader supply chain.
2026/2/3
After missing the surge in AI computing demand and facing setbacks in process-node execution, Intel is positioning its 18A node as a pivotal turning point to re-enter the advanced-manufacturing race. With core technologies such as RibbonFET (GAA) transistors, PowerVia backside power delivery, and High-NA EUV lithography, 18A not only carries hopes for a revival of Intel’s in-house processor roadmap, but is also viewed as a key bargaining chip for Intel Foundry Services to win tier-one customers. This article outlines the major technical elements and applications of the 18A process, compares Intel’s approach with TSMC and Samsung in the 2nm landscape, and assesses the challenges Intel may face going forward.
2026/1/8
At CES 2026, NVIDIA's CEO unveiled the company’s latest AI strategy, highlighting six core themes shaping its roadmap: AI as the next foundational computing platform, the rapid adoption of open-source models driving long-term industry growth, agentic AI tools evolving into autonomous systems with rising demand, Physical AI bridging the digital and real worlds, Alpamayo as an explainable end-to-end autonomous driving solution, and robotics emerging as the next major wave in the AI industry.
2026/1/6
As the AI industry transitions from a phase of rapid infrastructure buildout to one of mature applications, the demand for computing power on the inference side is expected to become a key growth driver for the next stage of large-scale AI commercialization. In this context, the general-purpose flexibility of GPUs and the efficiency advantage of ASICs are likely to create direct competition in the inference market, which could impact NVIDIA’s future revenue growth and market share. This article introduces NVIDIA’s planned $20 billion “quasi-acquisition” of Groq’s LPU inference technology through a licensing deal. It analyzes the motivation behind the transaction, the anticipated outcomes post-acquisition, potential technical risks, the feasibility of execution as currently planned, and the possible market implications if the deal goes through.