Consumers are paying for AI
Nvidia anticipates a fiscal second quarter revenue of $11 billion, marking a 64% year-over-year growth. This is astonishing, considering the scale of the company.
Such growth comes from a broad spectrum of sources, ranging from individuals to publicly traded companies: businesses are training and fine-tuning models, while consumers utilise those models heavily.
Running these large language models (LLMs) is very costly. ChatGPT, for instance, costs about $0.002 per 1,000 tokens, while GPT-4 8k costs $0.06 per 1,000 tokens. Say you wish to generate a one-page document (~500 words) from a prompt of around 100 words. Those 600 words come to roughly 800 tokens, so at GPT-4 prices a single request costs about $0.048; if 1,000 users were to do this, it would cost $48. This figure doesn't even account for any iterations. In today's world of zero marginal cost, where the cost of accessing information falls as it scales, this is an exorbitant price.
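The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This assumes a flat $0.06 per 1,000 tokens (the GPT-4 8k figure) and the common rule of thumb of roughly 4/3 tokens per English word; actual token counts and per-direction pricing vary.

```python
# Rough LLM cost estimate for the scenario in the text.
TOKENS_PER_WORD = 4 / 3       # rule of thumb, not an exact figure
PRICE_PER_1K_TOKENS = 0.06    # GPT-4 8k price in USD, flat assumption

def generation_cost(prompt_words: int, output_words: int, users: int) -> float:
    """Estimated total cost in USD for `users` requests of the given size."""
    tokens = (prompt_words + output_words) * TOKENS_PER_WORD
    return tokens / 1000 * PRICE_PER_1K_TOKENS * users

# 100-word prompt, 500-word (one-page) output, 1,000 users:
print(round(generation_cost(100, 500, 1000), 2))  # → 48.0
```

Scaling the same calculation to iterations or larger user bases shows how quickly these per-request costs compound.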
Consumers have grown accustomed to free information and product trials, but the tangible cost of AI has shifted this drastically: companies now need to think about how to monetise their product from the very beginning.
For example, OpenAI charges $20 per month for ChatGPT and anticipates revenues of $200 million in 2023 and $1 billion by 2024. Duolingo charges $30 per month for its AI tutor, and PhotoAI charges $29 to generate photographs with AI.
This shift has several implications for startups and larger companies:
Bootstrapping AI companies without paying customers will become significantly more challenging.
Startups may offset costs through investor funding, but they will need to identify and implement monetisation strategies quickly.
Cost control through optimisation and preventative measures against abuse will become mandatory.
Smaller and local LLMs will be the obvious path to reducing costs.
Companies like Google and Microsoft are incorporating AI features into their existing product lines without increasing prices. This strategy is likely to affect their profits per customer.