OpenAI, ChatGPT costs too much: the company plans to produce its own AI chips

For months we have been consistently covering what OpenAI offers in generative artificial intelligence, namely ChatGPT and DALL-E. The latest rumors about the company led by Sam Altman, however, concern “other things”, and more precisely the possibility of making AI chips in-house as a solution to the high costs and limited availability of Nvidia’s computing GPUs.

AI chips are in short supply and costs are too high: what can OpenAI do?

The news comes from Reuters, according to which OpenAI is also evaluating potential acquisition targets. The company has not yet made a decision, but discussions on the matter have been going on for a year. At the moment there are four options on the table:

  • Make AI chips in-house
  • Acquire a company that already makes AI processors
  • Diversify its suppliers
  • Partner more closely with a vendor, such as Nvidia

The idea of Sam Altman & co. is a consequence of supply difficulties, but above all of the staggering sums OpenAI spends every day to keep the gears of its products moving. To give you an idea, according to Dylan Patel (an analyst at SemiAnalysis), running ChatGPT costs around $700,000 a day, therefore over $250 million per year.
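As a quick sanity check on that figure (a rough sketch, assuming the reported $700,000 daily cost stays constant across a full year):

```python
# Annualize the reported daily running cost of ChatGPT.
DAILY_COST_USD = 700_000        # Dylan Patel's estimate (SemiAnalysis)
DAYS_PER_YEAR = 365

annual_cost_usd = DAILY_COST_USD * DAYS_PER_YEAR
print(f"Estimated annual cost: ${annual_cost_usd:,}")
# A constant $700,000/day works out to $255,500,000/year,
# consistent with the "over $250 million per year" figure.
```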

If OpenAI is moving in this direction, it means that processors for artificial intelligence are a hard commodity to find. At the moment the sector is dominated by Nvidia, with a market share that exceeds 80%. And Nvidia’s graphics processing units (more than 10,000 of them) are the basis of the Microsoft supercomputer that has powered ChatGPT since 2020. Speaking of the Redmond company, it is reportedly working on a custom AI chip that OpenAI is currently testing.

All that remains now is to wait for a decision from OpenAI: the definition of a strategy to address the twofold problem of cost and supply described above.
