The AI Rundown by Lightscape Partners - 2/17/26

Cisco’s 102.4 Tbps Silicon One targets 128K-GPU fabrics; OpenAI starts testing ads in ChatGPT; Anthropic lands a $30B mega-round at a $380B valuation

Good morning and welcome back to another edition of The AI Rundown by Lightscape Partners.

  • Cisco is swinging for the heart of AI networking with its 102.4 Tbps Silicon One G300, pitching fewer switches, higher radix, and better congestion control for hyperscale training clusters. If the utilization claims hold, networking becomes less of an invisible tax on GPU spending and more of a lever to shorten training time.

  • OpenAI is testing ads inside ChatGPT for signed-in adult users on the Free and Go tiers in the U.S., signaling a real shift in the product’s business model. If conversational ads take hold, they could pull budget from search-style channels, while forcing OpenAI to prove that ads stay cleanly separated from answers and do not erode trust.

  • Anthropic’s reported $30B Series G at a $380B post-money valuation is a statement that frontier AI is still a scale game, even at extreme prices. The round raises the performance bar for Claude and makes distribution, enterprise attach rates, and compute access the core questions investors will now pressure-test.

Stay tuned as we explore these stories and their implications for the future of AI, technology, and innovation.

If you haven’t yet, please support the newsletter by subscribing!

Hardware

Cisco introduced its 102.4 Tbps Silicon One G300 switch chip, targeting AI cluster networking at hyperscale. Link.

  • The G300 packs 512 SerDes lanes at 200 Gbps each and can aggregate lanes into port speeds of up to 1.6 Tbps today (see the quick arithmetic sketch after this list).

  • Cisco claims its congestion controls and shared packet buffer improve link utilization, cutting training times by up to 28%.

  • A large radix supports clusters of up to 128,000 GPUs with about 750 switches, versus roughly 2,500 with prior-generation designs.

  • The chip will power Cisco Nexus 9000 and Cisco 8000 series systems, alongside new 1.6 Tbps optics and 800 Gbps LPO transceivers.
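
The headline figures above follow from simple arithmetic, and the short Python sketch below works through them as a rough sanity check. The 8-lanes-per-port grouping is our own inference from the published numbers (1.6 Tbps divided by 200 Gbps), not a detail Cisco has spelled out.

    # Back-of-envelope arithmetic behind the G300 headline figures
    # (our own sanity check, not Cisco's methodology).
    serdes_lanes = 512          # SerDes lanes per chip, per the announcement
    lane_speed_gbps = 200       # per-lane speed

    chip_bandwidth_tbps = serdes_lanes * lane_speed_gbps / 1000
    print(chip_bandwidth_tbps)  # 102.4 Tbps, matching the headline number

    # A 1.6 Tbps port implies grouping 8 x 200 Gbps lanes (our inference),
    # which would cap the chip at 64 such ports.
    lanes_per_port = 1600 // lane_speed_gbps
    max_1p6t_ports = serdes_lanes // lanes_per_port
    print(lanes_per_port, max_1p6t_ports)  # 8 lanes per port, 64 ports

    # Claimed fabric scale: about 750 switches for 128,000 GPUs versus
    # roughly 2,500 previously, a ~3.3x reduction in switch count.
    print(round(2500 / 750, 1))  # 3.3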

ByteDance is reportedly working with Samsung on an in house AI inference chip, aiming for 100,000 units in 2026. Link.

  • The report says ByteDance expects sample chips by end of March, with negotiations covering manufacturing and memory supply.

  • Sources describe the design as an inference-focused chip, with eventual output targeted at 350,000 units annually.

  • ByteDance disputed the Reuters report as inaccurate, while Samsung declined to comment on the discussions.

  • The effort reflects Chinese firms seeking alternatives to Nvidia amid supply constraints and evolving export controls on GPUs.

Models

A Chile led coalition announced Latam GPT, an open source language model aimed at reflecting Latin American Spanish and culture. Link.

  • Developers said the model will be released publicly, with training data and evaluation tuned for regional language variation.

  • Project leaders described it as a way to reduce reliance on US and Chinese models, and address local biases.

  • The initiative is backed by universities and research groups across the region, with Chile’s CENIA coordinating development efforts.

  • Teams plan to make the system usable for education, government, and business applications in Spanish and Portuguese contexts.

Chinese AI firms are preparing a wave of lower cost models, as local champions push to narrow the gap with US rivals. Link.

  • The report says companies are emphasizing efficiency and price, targeting mass deployment across consumer apps and enterprise tools.

  • It notes rising domestic competition after DeepSeek, with multiple vendors racing to release models that run cheaper locally.

  • Hardware constraints and export controls are forcing more work on optimization, compression, and inference speed for Chinese deployments.

  • The trend could pressure global pricing as Chinese models expand internationally and challenge incumbents on cost performance tradeoffs.

Product Launches

Court filings suggest OpenAI and Jony Ive’s io team are targeting 2027 for an AI hardware device, with earlier prototypes underway. Link.

  • The reporting cites a trademark dispute that surfaced internal timelines, indicating a consumer device is planned but not imminent.

  • Details remain limited, yet filings point to iterative prototypes and a multi-year development path before a public launch.

  • OpenAI has described the effort as hardware plus software, aiming to create a new interface for interacting with AI.

  • If successful, the device would extend OpenAI beyond apps, putting it in direct competition with Apple and other platform owners.

Enterprise + Consumer AI Applications

Takeda partnered with Iambic Therapeutics in a deal worth up to $1.7B to use AI for small molecule drug discovery. Link.

  • The collaboration targets programs in oncology, gastrointestinal disease, and inflammation, combining Takeda’s priorities with Iambic’s AI platform.

  • Iambic will apply its discovery technologies and wet lab capabilities to design candidates, with Takeda responsible for later development.

  • Financial terms include an upfront component and milestone based payments that could total more than $1.7B overall.

  • The agreement highlights big pharma’s push to scale AI assisted chemistry, betting faster iteration can lift R&D productivity.

Uber Eats launched an AI powered Cart Assistant to help shoppers build grocery orders, ask questions, and get real time recommendations. Link.

  • The chatbot can add items to carts, suggest substitutes, and answer queries about products while users shop for groceries.

  • Uber positioned it as a way to reduce search friction, especially for long lists where customers often abandon mid order.

  • The feature ties into Uber’s broader push to embed AI across Eats and the core app, following prior personalization efforts.

  • If engagement rises, assistants like this could reshape retailer partnerships by influencing which brands surface first in recommendations.

Data Centers + Energy

Nebius unveiled plans for a 240MW AI optimized data center campus in Bethune, France, redeveloping a former Bridgestone site. Link.

  • Reports say the project will roll out in phases, with early capacity expected online later in 2026 as construction progresses.

  • At full build, 240MW would place the campus among Europe’s larger AI focused sites, reflecting rising sovereign compute ambitions.

  • Nebius is positioning the site to support GPU dense deployments, where power, cooling, and supply chain lead times dominate schedules.

  • The announcement signals continued infrastructure buildout in France, as developers chase grid access and local permitting for AI growth.

Startup Funding & Valuations

Anthropic announced a $30B Series G that values the company at $380B post money as demand for Claude continues to surge. Link.

  • Anthropic said the financing will support model research, safety work, and additional compute to scale training and deployment.

  • The round is one of the largest in AI, underscoring how capital intensive frontier model development has become.

  • A $380B post-money valuation raises the bar for revenue expectations, especially for enterprise subscriptions and platform partnerships.

  • Anthropic positioned the raise as enabling faster product iteration around Claude, with more reliable performance and broader deployment options.

Runway raised $315M at a reported $35B valuation to expand its AI video tools and training infrastructure. Link.

  • The round adds capital for model training, compute, and product development as generative video competition accelerates.

  • Runway’s tools focus on text to video generation and editing workflows, aiming to serve creators, studios, and brands.

  • The company framed the funding as enabling higher quality outputs, longer clips, and more controllable motion and style.

  • Big valuations signal investor belief that video generation is moving from novelty to production use cases across media pipelines.

Benchmark is raising $225M in special funds to double down on Cerebras after the chip maker’s $1B raise at a $23B valuation. Link.

  • The new capital is structured as special purpose vehicles, letting Benchmark add exposure without reopening its main funds.

  • Cerebras sells wafer scale AI chips for training and inference, pitching fewer systems and simpler clusters than GPU based stacks.

  • TechCrunch reported Cerebras recently raised $1B, putting it among the most richly valued AI hardware startups.

  • Benchmark’s move signals investor confidence that alternative accelerators can win workloads as demand for compute continues outpacing supply.

Didero raised $30M to build AI assistants that automate procurement and internal workflow tasks across large enterprises. Link.

  • The company targets repetitive back office steps like vendor onboarding, purchase requests, and approvals, where context is scattered across systems.

  • Didero said its assistants connect to existing tools, pulling policies and documents to generate compliant actions, not generic chat.

  • The $30M round funds engineering and go to market, aiming to expand deployments beyond early procurement focused customers.

  • If it works, workflow specific agents could become a new software layer, embedding automation inside procurement suites and ERPs.

Humanoid robotics startup Apptronik raised another $520M to scale Apollo robots and expand manufacturing for warehouse and industrial customers. Link.

  • The funding boosts a round backed by major investors, reflecting renewed interest in physical AI as warehouse labor remains tight.

  • Apptronik is developing the Apollo humanoid, aiming to handle repetitive tasks in logistics and factory settings alongside human workers.

  • The company plans to use proceeds for engineering, pilots, and production ramp, a key bottleneck for hardware heavy startups.

  • Large raises in robotics highlight investor appetite for automation that can directly replace labor hours, not just improve software workflows.

Vega, an AI driven cybersecurity startup, raised $120M led by Goldman Sachs Alternatives to expand its automated threat detection platform. Link.

  • Vega uses machine learning to correlate signals across endpoints and networks, aiming to reduce alert fatigue for security teams.

  • The company said new funding will support product development, customer expansion, and additional data sources for model training.

  • Goldman Sachs Alternatives led the round, signaling institutional interest in security vendors that can sell automation, not headcount.

  • As attackers adopt generative tools, defense platforms are racing to match speed, making AI centric operations a competitive differentiator.

Safety + Ethics

An Albanian actor sued the country’s AI minister for defamation after a ChatGPT generated answer was cited in a public statement. Link.

  • The suit centers on claims that an AI generated response repeated damaging allegations, raising questions about accountability for automated outputs.

  • It highlights how officials may lean on chatbot output as evidence, even when systems warn users about errors and hallucinations.

  • Legal experts expect more disputes over defamation and privacy as generative tools become embedded in public sector communications.

  • The case could influence regional guidance on AI governance, pushing for clearer standards on verification before publishing AI assisted claims.

The Pentagon is pressing AI companies to expand generative systems onto classified networks, aiming to speed analysis while managing security risks. Link.

  • Defense officials want models that can run inside secure environments, reducing reliance on commercial clouds and limiting data exposure.

  • Companies have been cautious, citing fears that tools could be used for lethal targeting or other high risk military applications.

  • The report says discussions include deployment on classified systems, plus policy controls and auditability to satisfy governance requirements.

  • If resolved, the shift could open a large new market for frontier models, but it intensifies oversight and ethical scrutiny.

OpenAI

OpenAI began testing ads in ChatGPT for signed-in adult users on the Free and Go tiers, starting in the United States. Link.

  • OpenAI said ads will appear for some queries, with no ads for Plus, Pro, or enterprise subscribers yet.

  • The company framed advertising as a way to expand affordable access, while promising separation between ads and model responses.

  • Marketers are watching closely because conversational ads could shift search budgets, especially for high intent queries like travel and shopping.

  • The test signals a major business model change for ChatGPT, adding monetization pressure alongside ongoing compute and safety costs.

OpenAI introduced GPT-5.3 Codex Spark, a model tuned for programming tasks and agent style development workflows. Link.

  • OpenAI says Codex Spark prioritizes code generation, refactoring, and debugging, with improved reliability on complex, multi file changes.

  • The release is positioned as part of the Codex stack, designed to work with tools, tests, and repositories.

  • Spark is intended for faster iteration, letting developers ask for targeted edits instead of rewriting entire modules (a minimal usage sketch follows this list).

  • OpenAI framed the update as a step toward more dependable coding agents, reducing review overhead and integration risk.
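
For readers curious what that targeted-edit workflow might look like from code, here is a minimal sketch using the OpenAI Python SDK. The model ID is a placeholder guess based on the product name, not a confirmed identifier; check OpenAI’s published model list before running it.

    # Minimal sketch: asking a Codex-style model for a targeted patch rather
    # than a full rewrite. The model ID is hypothetical; substitute the real one.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    buggy_snippet = """
    def mean(values):
        return sum(values) / len(values)  # crashes on an empty list
    """

    response = client.chat.completions.create(
        model="gpt-5.3-codex-spark",  # placeholder for the model discussed above
        messages=[
            {"role": "system", "content": "Return only a minimal patch, not a rewrite."},
            {"role": "user", "content": f"Fix the empty-list crash in:\n{buggy_snippet}"},
        ],
    )

    print(response.choices[0].message.content)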

Thank you for reading the AI Rundown by Lightscape Partners. Please send any questions, comments, or suggestions to [email protected].