AI Intelligence for Builders and Investors
Track AI companies, startups, funding, and model releases in one place. Stay ahead with signal-first updates and personalized watchlists.
Signals today: 5
Funding tracked: 30
Models watched: 5
AI terms: 126+
Company Spotlight
Latest Signals
GLM-5 momentum +8%
zai-org • GLM-5
MiniMax-M2.5 momentum +18%
MiniMaxAI • MiniMax-M2.5
Qwen3.5-397B-A17B momentum +14%
Qwen • Qwen3.5-397B-A17B
Nanbeige4.1-3B momentum +11%
Nanbeige • Nanbeige4.1-3B
Nvidia to sell Meta millions of chips in multiyear deal
The Hindu • AI News
Process Safety Services Market Forecast for Robust Expansion to USD 6.22 Billion by 2032, Fueled by Risk Assessment Demand in Emerging Economies | Key Players: RRC Global, ABB, HIMA
Openpr.com • AI News
Gnani.ai launches multilingual speech model Vachana STT as India pushes sovereign AI
Business News India • AI News
US tech giant Nvidia announces India deals at AI summit
Columbia Gorge News • AI News
Your Watchlist
Sign in to build your personalized watchlist and get entity-level alerts.
Daily Intelligence Brief
Get the top AI market and product signals in one concise brief.
Get AI intelligence briefs
Receive high-signal updates on companies, funding, and models.
Funding Radar
xAI — $6B • Series C • Foundation Models
Databricks — $10B • Series J • AI Infrastructure
Perplexity — $500M • Series B • AI Applications
Physical Intelligence — $400M • Series A • Robotics
AI Education
Keep sharpening fundamentals with 15 lessons and 126+ terms.
Term of the day: World Model
AI Stock Pulse
Apple
NVIDIA
Amazon
Meta
Market data delayed. For informational purposes only.
Latest Research
Ensemble-size-dependence of deep-learning post-processing methods that minimize an (un)fair score: motivating examples and a proof-of-concept solution
Fair scores reward ensemble forecast members that behave like samples from the same distribution as the verifying observations. They are therefore an attractive choice as loss functions to train data-driven ensemble forecasts or post-processing methods when large training ensembles are either unavai...
arXiv
Operationalising the Superficial Alignment Hypothesis via Task Complexity
The superficial alignment hypothesis (SAH) posits that large language models learn most of their knowledge during pre-training, and that post-training merely surfaces this knowledge. The SAH, however, lacks a precise definition, which has led to (i) different and seemingly orthogonal arguments suppo...