
AI Grows Sharper, Cheaper, and Harder to Ignore | Image Source: venturebeat.com
STANFORD, California, April 8, 2025 – The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has released its 2025 AI Index report, and its findings point to a seismic shift in the global artificial intelligence landscape. Now in its eighth edition, the AI Index is more than a reference; it is a window into the heart of one of the most transformative technologies of our time. Packed with data, insights and trends, the 2025 report shows how AI has grown more powerful, become cheaper, and woven itself deeper into everyday life than ever before. But beneath the headline progress runs a quieter undercurrent: the struggle to deploy AI responsibly, fairly and meaningfully across sectors.
What’s changed? According to the report, almost everything. Over the past year, AI’s footprint has expanded across industries, countries and use cases, with 78% of organizations now reporting at least some level of AI use, up from 55% in 2023. But adoption alone is not the same as impact. And if AI is everywhere, its true value, financial, ethical and social, is still unfolding in unpredictable ways.
How has AI become much more affordable?
Perhaps the most striking statistics in the report concern cost. Inference, the process of running trained AI models, has become dirt cheap. According to Stanford’s AI Index, querying a GPT-3.5-level model cost $20 per million tokens in November 2022. By October 2024, that figure had fallen to just $0.07 per million tokens, a roughly 280-fold reduction in under two years. To put this in perspective, imagine a $50,000 car that could now be bought for less than $200.
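For readers who want to sanity-check those figures, here is a quick back-of-the-envelope calculation in Python. It uses only the two prices quoted above; the one-billion-token monthly workload at the end is purely illustrative, not something taken from the report.

```python
# Back-of-the-envelope check of the inference-cost figures cited above.
# The two prices come from the article; the 1B-token workload is illustrative.

price_nov_2022 = 20.00  # USD per million tokens, GPT-3.5-level inference, Nov 2022
price_oct_2024 = 0.07   # USD per million tokens, comparable-quality model, Oct 2024

fold_reduction = price_nov_2022 / price_oct_2024
print(f"Cost reduction: ~{fold_reduction:.0f}x")  # ~286x, i.e. roughly 280-fold

# The car analogy: a $50,000 car discounted by the same factor.
car_price = 50_000
print(f"Equivalent car price: ${car_price / fold_reduction:,.0f}")  # ~$175, under $200

# Illustrative workload: processing 1 billion tokens per month at each price point.
tokens_per_month = 1_000_000_000
for year, price in (("2022", price_nov_2022), ("2024", price_oct_2024)):
    cost = tokens_per_month / 1_000_000 * price
    print(f"{year}: ${cost:,.2f} per month for 1B tokens")
```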
This price collapse is not a fluke; it is the result of a three-way convergence: better algorithms, more efficient hardware and smarter model architectures. Smaller models are no longer synonymous with lower performance. Thanks to improved engineering, a compact 2024 model can often match or exceed the performance of a 2022 giant. As Nestor Maslej, research manager of the AI Index, noted in an interview with VentureBeat, the cost of developing quality, though not frontier, models keeps falling.
For IT leaders, this means one thing: it’s time to rethink procurement strategy. With open-weight models closing the performance gap with closed ones, down to just a 1.7% difference as of February 2025, companies now have viable, cheaper alternatives to traditional API subscriptions. This democratization opens doors for startups, universities and governments.
Does AI adoption pay off?
The short answer: not always. Although 78% of organizations have integrated AI into at least one function, returns vary considerably. The Index shows that most firms reported only modest financial improvements. For example, only 47% of companies using generative AI in corporate strategy or finance reported revenue increases, and most of those gains were below 5%.
So, what’s going wrong? According to Maslej, the problem is not the technology but how it is applied. We still have little data on what separates the organizations achieving massive returns at scale from those that are not. That insight gap matters more and more as AI moves from experimentation to execution.
For leaders: Rolling out AI across the entire company can be tempting, but the smarter move is a targeted approach. Start with departments and use cases where the ROI is clear, measurable and sustainable. Monitor performance diligently and avoid spreading effort too thinly across unproven areas.
Which business functions are seeing real results?
Dig into the data and a pattern emerges. Some business functions reap disproportionate rewards from AI. According to the 2025 AI Index, cost savings are most evident in supply chain and service operations, while revenue gains are concentrated in corporate strategy and finance.
Here is a quick snapshot:
- Supply chain optimization: 61% report cost savings.
- Corporate finance/strategy: 70% report revenue gains.
- Customer service & sales: gains in productivity and customer engagement.
One plausible reason? These functions involve large-scale, repetitive, high-volume decision-making, precisely the areas where AI thrives. Whether forecasting demand, optimizing pricing or triaging customer support, AI cuts human overhead while improving speed and accuracy.
Action item: Make your first investments in these high-performing areas. Quick wins in supply chain and strategic finance can build internal momentum, and revenue, for broader adoption.
Can AI really boost workforce skills?
One of the most human-centered insights in the report concerns labour productivity. AI does not just help; it helps most where help is needed most. Across several studies, lower-skilled workers benefited far more from artificial intelligence tools than their highly skilled counterparts. In customer support, AI assistance boosted the productivity of lower-skilled workers by 34%, while highly skilled workers saw little change. Similar trends were observed in consulting and software engineering.
Why does this matter? Because AI is shaping up not as a job replacer but as a job equalizer. When deployed thoughtfully, AI can raise the floor without lowering the ceiling, giving junior employees a leg up and helping level the playing field.
Q: Should organizations train all their staff on artificial intelligence tools? A: Yes, but with nuance. Focus initial training on the functions or staff levels where AI can deliver large productivity gains, then scale from there.
Are we keeping up with AI’s risks?
Unfortunately, not really. The Index paints a troubling picture of a fast-maturing technology outpacing our ability to govern it responsibly. While 66% of companies recognize cybersecurity as an AI risk, only 55% actively mitigate it. Similar gaps exist for regulatory compliance (63% awareness versus 38% mitigation) and intellectual-property concerns.
And the risks are no longer theoretical. AI incidents rose by 56.4% in 2024, reaching 233 documented cases, ranging from biased algorithms and hallucinated outputs to harmful information and privacy failures. Regulation is catching up: U.S. federal agencies introduced 59 AI-related regulations in 2024, more than double the previous year’s total.
As Russell Wald, executive director of Stanford HAI, puts it: “AI is a technology that is changing civilization (…). Over the last year, AI adoption accelerated at an unprecedented pace, and its scope and impact will only grow.”
Advice to businesses: Build a responsible AI governance framework, not only for compliance but as a long-term strategic asset. Treat security, bias, explainability and data integrity as foundational design pillars, not afterthoughts.
What does the global AI innovation landscape look like?
While the U.S. continues to lead in producing AI models, other countries are closing the gap. In 2024, the United States produced 40 notable models, compared with China’s 15. But on quality benchmarks such as MMLU and HumanEval, Chinese models are now almost indistinguishable from their American counterparts: the performance gap narrowed from roughly 20 percentage points in 2023 to just 0.3 points in 2024.
And it’s not just China. The Middle East, Latin America and Southeast Asia are rising too, with countries such as Saudi Arabia launching massive artificial intelligence funds (Project Transcendence, reportedly worth up to $100 billion) and India committing more than $1 billion. AI is becoming a truly global arms race, fought not only with dollars but with data, education and access.
Notably, 90% of notable AI models now come from industry rather than academia, a dramatic reversal of pre-2006 trends. At the same time, academia remains the most influential source of AI research, illustrating a growing gap between where the theory is produced and where the models are built.
What about education and public perception?
AI and computer science education are expanding rapidly, but readiness remains uneven. Two-thirds of countries now offer K-12 computer science education, double the share in 2019. Yet gaps persist, particularly in Africa and rural Latin America, where infrastructure problems such as unreliable electricity remain obstacles.
In the United States, 81% of K-12 teachers believe AI should be part of foundational learning, yet fewer than half feel equipped to teach it. That disconnect points to a troubling bottleneck: how do we prepare the next generation for an AI-centered world if their teachers are being left behind?
Meanwhile, public sentiment about AI is shifting but remains deeply divided. Countries such as China (83%), Indonesia (80%) and Thailand (77%) see AI as more beneficial than harmful, while optimism is far lower in the United States (39%) and Canada (40%). Still, even in sceptical countries, sentiment is warming year over year.
All of this suggests that while AI’s technical evolution is racing ahead, the social, ethical and educational scaffolding has yet to catch up.
Final thoughts: AI is no longer the future. It is here, it is evolving, and it is becoming ever more affordable. The question now is not whether you can use AI, but whether you will use it well, and whether you are prepared for the responsibilities that come with it.