Elon Musk secures $6 billion in funding for Grok artificial intelligence startup to purchase Nvidia chips


Elon Musk, the CEO of Tesla and head of the Department of Government Efficiency (DOGE), is raising $6 billion for his artificial intelligence startup xAI at a valuation of about $50 billion. According to CNBC, the funds are expected to arrive next week, with $5 billion coming from Middle East sovereign wealth funds and $1 billion from other investors, and will go toward purchasing 100,000 Nvidia chips.

Musk's AI startup xAI chose Memphis, Tennessee, as the site of a supercomputer named Colossus, in which Nvidia's AI chips play a crucial role. The facility currently runs 100,000 Nvidia H100 GPUs, each costing around $30,000. Musk intends to keep buying Nvidia chips and to upgrade the facility with Nvidia's H200 GPUs, which offer larger memory capacity but cost close to $40,000 each. He also plans to purchase an additional 300,000 Nvidia Blackwell B200 GPUs next summer.
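The scale of spending implied by these figures can be sketched with some back-of-the-envelope arithmetic (the per-chip prices are the approximate list prices cited above; actual contract pricing is not public):

```python
# Rough cost figures based on the per-chip prices reported in the article.
H100_COUNT = 100_000   # GPUs currently installed in Colossus
H100_PRICE = 30_000    # approx. cost per H100, USD
H200_PRICE = 40_000    # approx. cost per H200, USD

h100_outlay = H100_COUNT * H100_PRICE
print(f"H100 outlay: ${h100_outlay / 1e9:.1f} billion")          # $3.0 billion
print(f"H200 premium per chip: ${H200_PRICE - H100_PRICE:,}")    # $10,000
```

At list price, the existing 100,000-GPU build-out alone represents roughly $3 billion in chips, before networking, power, and facility costs.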

xAI launched the Grok chatbot in November last year, positioning it to compete with OpenAI's ChatGPT, led by Sam Altman. Musk was an early backer of OpenAI but departed after conflicts with Altman, and has now become his competitor in AI.

About xAI Grok

Grok is an artificial intelligence developed by xAI that aims to give useful and truthful answers to a wide range of questions. The concept is inspired by The Hitchhiker's Guide to the Galaxy and by J.A.R.V.I.S., the AI system built by Tony Stark in Iron Man, with the goal of helping users understand the science of the universe and answering almost any question. By its own description, Grok answers honestly and without judgment, focuses on understanding and detailed explanation, and can even approach human problems with humor or from an outside perspective.

How xAI Uses Nvidia Chips to Build a Supercomputer

xAI uses H100 GPUs to build the Colossus supercomputer for training its AI models. Nvidia's GPUs give xAI not only raw computational power but also purpose-built infrastructure for accelerating AI and machine learning, letting xAI push the limits of AI research and development.

Six Ways Nvidia Hardware Powers Grok

  1. Massive Parallel Processing: Nvidia's H100 GPUs are designed for parallel processing, which is crucial for the complex calculations in AI model training. These chips can execute thousands of operations simultaneously, speeding up the training of Grok's large language model (LLM).
  2. Scalability: Colossus initially comprises 100,000 Nvidia H100 GPUs, letting xAI handle computational workloads far beyond what typical CPUs can manage, with plans to double the count to 200,000 GPUs (including H200 chips).
  3. Energy Efficiency: Operations at this scale demand enormous amounts of power, but for xAI's workloads Nvidia's GPUs deliver far more computation per watt than traditional CPUs.
  4. Network Infrastructure: Colossus uses Nvidia's Spectrum-X Ethernet networking platform, which provides the low-latency, high-bandwidth connections needed to support multi-tenancy.
  5. Advanced AI Features: Nvidia's H100 and H200 chips include features tailored for AI, such as high-bandwidth memory (HBM3 and HBM3e) that cuts data-transfer time between memory and GPU compute cores. This matters for AI workloads, where data movement can become the bottleneck.
  6. Software Support: Nvidia provides the CUDA parallel computing platform and programming model, which xAI uses to develop its AI algorithms.
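The parallelism in point 1 can be sketched with a toy example in plain Python (an illustration, not xAI's actual code): every element of a matrix product is an independent dot product, so the work can be fanned out across workers, which is what a GPU does across thousands of hardware cores, and matrix products are the core operation in LLM training.

```python
# Toy sketch: matrix multiplication decomposes into independent dot products,
# each of which can be computed concurrently. A GPU runs this same pattern
# across thousands of hardware cores instead of a handful of threads.
from concurrent.futures import ThreadPoolExecutor

def dot(row, col):
    return sum(a * b for a, b in zip(row, col))

def matmul_parallel(A, B):
    # Transpose B so each output element is a dot(row of A, column of B).
    Bt = list(zip(*B))
    with ThreadPoolExecutor() as pool:
        # Every (row, column) pair is independent work, so it can be
        # dispatched to the pool and computed simultaneously.
        return [list(pool.map(lambda col: dot(row, col), Bt)) for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_parallel(A, B))  # [[19, 22], [43, 50]]
```

Python threads gain little here because of the interpreter lock; the point is the decomposition itself, which maps directly onto massively parallel hardware.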